I do it a bit differently. If there are any glaring (critical) things that need to be fixed, I hit on them in a summary prior to the introduction.
In the introduction I include details of the people who worked on the project on the client side. The report needs to live on its own: if they come back and look at it later, they can find out who internally were the system admin, project coordinator, and other contacts.
Their report skips a dedicated methodology section entirely and crams that material in with the findings.
I also have a completely separate findings section detailing the following:
Level of Risk (Low/Med/High)
Exploitation Likelihood (Low/Med/High)
This gives the sysadmins a checklist to work from when fixing things. Selecting Low, Medium, or High for the Risk and Likelihood takes some serious thought. The risk may be harder to quantify in a black box test, where you don't know what is around that box. Also, you can't just give everything a rating of High; you have to prioritize. The overall risk is derived from the Level of Risk and the Exploitation Likelihood using a matrix similar to this: http://www.dwi.gov.uk/regs/service/fig4a.gif
I can't find the one the NSA uses (that's the one I actually work from), but the one linked above is similar and hopefully gets my point across.
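In code terms, the matrix is just a lookup table keyed on the two ratings. Here's a minimal sketch in Python, assuming a standard 3x3 layout like the linked figure; the cell values are illustrative, not the actual matrix I use:

```python
# Minimal sketch of a 3x3 risk matrix lookup. The cell values below are
# illustrative assumptions in the spirit of the linked figure, not the
# NSA matrix or any specific standard.

RATINGS = ("Low", "Medium", "High")

# Overall risk indexed by (level_of_risk, exploitation_likelihood).
RISK_MATRIX = {
    ("Low",    "Low"):    "Low",
    ("Low",    "Medium"): "Low",
    ("Low",    "High"):   "Medium",
    ("Medium", "Low"):    "Low",
    ("Medium", "Medium"): "Medium",
    ("Medium", "High"):   "High",
    ("High",   "Low"):    "Medium",
    ("High",   "Medium"): "High",
    ("High",   "High"):   "High",
}

def overall_risk(level_of_risk: str, likelihood: str) -> str:
    """Look up the overall rating for a single finding."""
    if level_of_risk not in RATINGS or likelihood not in RATINGS:
        raise ValueError("ratings must be Low, Medium, or High")
    return RISK_MATRIX[(level_of_risk, likelihood)]

# Example: a high-impact flaw that is unlikely to be exploited lands at
# Medium, which is what keeps you from rating everything High.
print(overall_risk("High", "Low"))  # Medium
```

The point of making it an explicit table rather than a formula is that each cell is a deliberate judgment call, which matches how the ratings should be assigned in the first place.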
One final piece of chrome: I highly suggest using the cross-referencing feature of your word processor. You can add references that say "see BLAH" and have it fill in the text and work as a link in the PDF viewer. It is a small touch, but it demonstrates your attention to detail. It also helps a bit since I break up my sections differently.