Every penetration tester goes through several rites of passage on their path from lowly Nessus monkey to experienced pentester. I wouldn't say these are my favourites, but they are surprisingly common. How many have you checked off the list?
Not being prepared
Whether it's a product of laziness or just being on back-to-back gigs for weeks on end, at some point (and usually more than once) a pentester goes on-site completely unprepared for the task at hand. Is it unprofessional? Yes, of course.
Does it happen? Yes, of course. There are different forms of not being prepared, from simply not running the latest software updates through to the catastrophic situation of having pretty much zero knowledge of the test ahead and none of the equipment.
How well you handle this will pretty much define your success as a tester. Normally _it's the client_ that's unprepared, so the one occasion the customer has everything ready to go while you're still hunting through your email for the proposal is usually when everything goes wrong in the worst possible way.
The very worst form of not being prepared is when you're perfectly prepared and something truly horrible happens, like a last-minute update hosing your entire build. There's a reason we freeze all but critical updates in the days before a penetration test at Mandalorian, and that reason is the very bitter voice of experience itself.
Dressing incorrectly on site
Most pentesters will either wear a suit on-site or something more casual for the server room, like a polo shirt, black trousers and shoes. Occasionally a pentester will turn up in something wholly inappropriate like a tracksuit, or even worse. I consider this a variant of not being prepared, but there's nothing like the feeling of getting up early to wait for Asda to open so you can buy the least skanky suit available before going on-site, because you left your luggage hundreds of miles away at home. Yes, this happened to me.
Not getting to know their tools
Everyone hates on Nessus, it's a given. Everyone thinks Nmap is the best port scanner (which is only true for certain values of best, not all). Everyone hates on Windows, except for the people who hate on OS X more. At Mandalorian everyone hates on LibreOffice. Except me. I took the time to learn how to use it the LibreOffice way. Having said that, I'm not immune. The other day I was writing 3,000 words on traceroute for Breaking In and found out for the first time that most Unix traceroute implementations now support TCP static port traceroutes out of the box, after over 10 years of using the excellent Hping instead. Time spent learning how to get more from your existing tools is usually repaid in spades.
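For example, the traceroute shipped with most modern Linux distributions can do the TCP fixed-port trick natively. A sketch, assuming that implementation and using a hypothetical TEST-NET target address:

```shell
target=192.0.2.10   # hypothetical in-scope host (TEST-NET documentation range)

# TCP SYN probes (-T) to a constant destination port (-p) help trace paths
# through firewalls that drop UDP and ICMP. -T needs raw sockets, i.e. root,
# so only run the real thing when we have the tool and the rights.
if command -v traceroute >/dev/null && [ "$(id -u)" -eq 0 ]; then
    traceroute -T -p 443 -m 3 -w 1 "$target"
else
    echo "would run: traceroute -T -p 443 $target"
fi

# The classic alternative, via hping's traceroute mode:
#   hping3 -S -p 443 --traceroute "$target"
```

The point being that the first form has been sitting in the stock tool all along.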
Not storing evidence
So you've hacked the AD to shreds, compromised every aspect of the environment and left the place in total disarray. Great stuff. Now when you come to start writing up findings you notice something... missing. Before you know it you're reliant on things you remembered but didn't write down, and things you swore you wrote down but can't find. At Mandalorian we have a standard file structure enshrined in our methodology, designed specifically for the situation where someone gets hit by a bus and has to hand a gig over partway through. By keeping things in a standard location and logging as much practical information as possible, you ensure that when asked about a finding in six months' time you can give an accurate answer.
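The exact structure matters less than having one and sticking to it. A minimal sketch of the idea (the directory and job names here are made up for illustration, not Mandalorian's actual layout):

```shell
# Create a predictable per-gig skeleton; every tester uses the same names.
gig="acme-external"   # hypothetical job name
mkdir -p "$gig/scope" "$gig/notes" "$gig/evidence" "$gig/tool-output" "$gig/report"

# Raw tool output goes straight into the standard location as you work, e.g.:
#   nmap -sS -iL "$gig/scope/targets.txt" -oA "$gig/tool-output/nmap-tcp-full"
ls "$gig"
```

A handover then becomes "here's the directory", not an archaeology exercise.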
Using scanner output in a report finding
Don't. Just don't. At Mandalorian it's viewed so dimly that it's actually governed by the disciplinary procedure. Why? In every pentest you need to ask yourself, "What value am I adding here over and above an automated tool the end user could run themselves?" If the answer is none, then they should be running an automated tool themselves. You're not an automated tool, you're better than that. If you're not, then you should be.
Not providing references to further information
Findings should be both concise and as detailed as they need to be. Understandably this can present a problem, the ideal solution to which is to point the reader elsewhere. At the very least you should use CVE references in infrastructure test reports, and either the OWASP Top 10 or something like CWE for web-based application tests.
Scanning the wrong IP address
We've all done this one. It happens. You put a dot in the wrong place or something goes the wrong way. Check before you start testing, check during testing and check your results before finishing testing. I like to store my target lists in a text file for infrastructure tests, then reference the file. That way I know I'm at worst getting it consistently wrong.
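The single-targets-file habit can be sketched like this (the addresses are documentation-range placeholders; `-iL` is nmap's standard read-targets-from-file option):

```shell
# One authoritative scope file, checked against the proposal before testing.
cat > targets.txt <<'EOF'
192.0.2.10
192.0.2.20
198.51.100.0/24
EOF

# Every tool then reads the same file, so a typo is at least a consistent typo:
#   nmap -sS -iL targets.txt -oA nmap-tcp-full
wc -l < targets.txt   # quick sanity check: does the count match the scope?
```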
Including your own system in the results
It's great when you have some critical findings for the report, but somewhat embarrassing when they turn out to be for your own system. Make sure whatever you take on-site is hardened, and don't assume that you won't be attacked while on-site.
Closing/Crashing without saving
If I could only pick a single entry in this list then this would be it, or possibly scanning the wrong IP address. It's not uncommon to start writing something up, go off and do something else, then come back and update it. All too often though, something causes a process or the entire test system to fall over, and you're left hunting for swap files trying to recover your test notes. Always save before executing anything that might fork out of control or generate serious load or I/O, but of course I'm preaching to the converted.
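Beyond saving manually, it's worth logging whole terminal sessions continuously, so a crash costs you a prompt rather than your notes. A sketch using the util-linux `script` command (the `-c` invocation here just demonstrates it non-interactively; in practice you'd start it at the top of the gig):

```shell
# -f flushes after every write so the log survives a crash; -q suppresses
# the start/stop banners. Normally: script -q -f "session-$(date +%F).log"
script -q -f -c 'echo important finding' session.log

# Inside tmux, the equivalent is mirroring the current pane to a file:
#   tmux pipe-pane -o 'cat >> pane.log'
```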