In-depth Q&A on federal cybersecurity writing
Earlier this fall, I did a Q&A session on my experiences writing for NIST. Last week I did a follow-up Q&A session that went into more detail on my NIST and FedRAMP writing. Christian Baer at Schellman kindly organized the latest Q&A for his colleagues working as assessors of federal agency cybersecurity requirements like FedRAMP, CMMC, FISMA, CSF, and SSDF. With their permission, I’m sharing edited highlights from the session.
Q1: What’s it like to draft or revise a NIST publication?
A: I’ve done a lot of writing outside of NIST, and the NIST process is one of the most challenging I’ve dealt with. There are so many things to take into account:
First, NIST often has mandates from Congress, the President, or others that they need to meet. So the work has to comply with those requirements.
The intended audience has changed over time. When I started supporting NIST, almost everything was for federal employees and contractors. Now most NIST documents aren’t federal agency-specific. They’re used by all sorts of organizations around the world. Trying to make sure that the documents work for everybody is a unique challenge.
NIST is always under-budgeted, so it becomes even more important to write documents that don’t quickly become outdated and need revisions.
When NIST creates guidance, it goes out for public comment, sometimes more than once. This gives everyone the opportunity to review the guidance and share their viewpoint. NIST gets a wide variety of feedback and then has to figure out how to address it. That can be quite a challenge. People have different viewpoints, and we respect them and take them into serious consideration—but ultimately, we need to be true to the purpose of the document.
It’s a lot of balancing acts, all the while taking into account the available resources. You can’t spend too much time on one publication, or you might not get to write another publication that’s also needed.
Q2: What are your thoughts on being prescriptive versus flexible when writing NIST publications?
A: That’s another tough balancing act. Part of the problem is that there are misconceptions that anything NIST produces is mandatory. And that’s never been the case. NIST is not a regulatory agency and does not have the authority to make things mandatory. Certain documents, the FIPS (Federal Information Processing Standards), are made mandatory for federal agencies by law. Other documents are made mandatory by OMB or other federal agencies, which is out of NIST’s control.
Especially because most NIST documents have to work for so many different audiences, they use a lot of “should” language. That’s intentional, because NIST can’t possibly anticipate every situation. Generally, “should” means that this is a good idea, something you should consider, but if it doesn’t work for you, that’s OK.
Some NIST documents use “shall” language, but they are generally defining algorithms or protocols. If you’re going to implement the protocol in accordance with the document, you must do things this way. There’s no “should” about it.
In terms of being more prescriptive, the top feedback I get from people is, “tell us what to do.” And I wish it were that easy. Because the guidance is used by organizations of all sizes across all sectors around the world, it’s impossible to give prescriptive, detailed guidance that applies to everybody. There’s way too much variation. And technology changes way too fast. So we tend to err on the side of being flexible and put our faith in the readers of the documents to do the right thing. We all know that isn’t always what happens, but the alternative is to come up with standards or guidelines that are so rigid that nobody’s going to use them, because they’re going to say, “This is impossible, we can’t do all of this.”
Q3: I had heard that NIST was hesitant to define low, moderate, and high baselines for SP 800-53 because that wasn’t the intent. Wasn’t the intent to use it as a framework, a control catalog? With federal baselines, controls aren’t removed, they’re added.
A: I completely agree with the concerns about defining things like low, moderate, and high. In the mid-2000s I helped develop the Common Vulnerability Scoring System (CVSS) version 2. It provides a severity score for each software vulnerability. A lot of organizations have used it to prioritize their patching, like having a policy that says every vulnerability with a score of 7 or higher has to be patched within 30 days.
That was never how CVSS was supposed to be used. CVSS had a base score, which reflected the characteristics of a vulnerability that were unlikely to change over time. Pretty much every CVSS score you’ve ever seen is a base score. NIST created the National Vulnerability Database in part to publish those scores so everybody in the world could use them. CVSS also defined temporal scores, which were time-sensitive, and environmental scores, which were specific to an organization’s environment. The intention was that organizations would use all three types of scores: they’d take the base score and apply the additional factors to it to come up with scores that were meaningful for their environment. Unfortunately, hardly any organizations do that; they only use base scores because of the additional resources needed to calculate and update temporal and environmental scores.
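To make that concrete, here’s a small sketch of how the temporal factors were meant to adjust a published base score, using the temporal equation and factor values from the CVSS v2 specification. The function and dictionary names are mine, and this covers only the temporal piece, not environmental scoring:

```python
# Sketch of CVSS v2 temporal scoring: the published base score is multiplied
# by three time-sensitive factors (multiplier values from the CVSS v2 spec).
EXPLOITABILITY = {"unproven": 0.85, "proof-of-concept": 0.90,
                  "functional": 0.95, "high": 1.0, "not defined": 1.0}
REMEDIATION_LEVEL = {"official fix": 0.87, "temporary fix": 0.90,
                     "workaround": 0.95, "unavailable": 1.0, "not defined": 1.0}
REPORT_CONFIDENCE = {"unconfirmed": 0.90, "uncorroborated": 0.95,
                     "confirmed": 1.0, "not defined": 1.0}

def temporal_score(base: float, exploitability: str,
                   remediation: str, confidence: str) -> float:
    """Adjust a CVSS v2 base score with temporal factors, rounded to one decimal."""
    score = (base * EXPLOITABILITY[exploitability]
                  * REMEDIATION_LEVEL[remediation]
                  * REPORT_CONFIDENCE[confidence])
    return round(score, 1)

# A 9.3 base score drops once an official fix exists and exploitation is unproven:
print(temporal_score(9.3, "unproven", "official fix", "confirmed"))  # 6.9
```

The point of the example is what almost nobody did in practice: a 9.3 on the public NVD feed could really be a 6.9 for an organization that tracked these factors, which changes which patches get prioritized.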
A lot of times, people have the best of intentions with things like creating baselines or developing metrics and thresholds. But the users of the baselines, metrics, and thresholds often don’t use things the way that the creators intended.
Q4: What’s a piece of advice you’d give someone reading a large NIST publication for the first time?
A: Expect to read it more than once. Obviously, if you’ve got a 500-page document, you’re not going to sit down and read it cover to cover and comprehend it all. I would advise reading such a document a chunk at a time. It’s no different than reading a large textbook in school. Be prepared to take notes, highlight things, go back and re-read things, and go to other documents to get more information because NIST intentionally tries not to duplicate material across publications. For example, if you’re reading SP 800-63-4, it points to other documents. Now you need to find those documents and read them too, at least parts of them.
Q5: Have there been any surprising disagreements or enlightening moments during the development of a publication that changed your view on a security issue?
A: Several years ago, we were updating SP 800-40 on patch management. This was through the NIST NCCoE, so we were working with engineers from Microsoft and other tech companies who release patches and help their customers prioritize patches. They have a great deal of experience on that side of things. One of them essentially proposed that we stop worrying so much about prioritizing individual patches. Instead of doing things like assigning a vulnerability score to each patch and then setting the timeframe for addressing patches by score, you have a regular schedule for applying patches unless a patch has exceptional circumstances, like a major zero day that needs to be patched immediately. You treat patching as preventative maintenance for technology, like preventative maintenance on a car.
The first time I heard that, I thought it was crazy: just throwing away all the hard work that’s gone into patch and vulnerability metrics and prioritization. I struggled with it for a while, but the more I read about it and the more we talked about it, the more I came around to that point of view. I’m now a big advocate for simplifying the prioritization process. Instead of spending so much time and energy trying to prioritize all these vulnerabilities, focus on improving the mitigation processes.
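The maintenance-style approach above can be sketched as a simple policy function. This is purely illustrative: the 30-day routine cycle and one-day emergency window are assumed values for demonstration, not figures taken from SP 800-40:

```python
from datetime import date, timedelta

# Illustrative sketch of "patching as preventative maintenance": patches ride
# a fixed maintenance cadence unless they have exceptional circumstances.
# The cycle length and emergency window are assumptions, not SP 800-40 values.
ROUTINE_CYCLE_DAYS = 30   # assumed monthly maintenance window
EMERGENCY_DAYS = 1        # assumed fast-track window, e.g. for a major zero day

def patch_due_date(released: date, emergency: bool = False) -> date:
    """Return when a patch should be applied under a maintenance-style policy."""
    days = EMERGENCY_DAYS if emergency else ROUTINE_CYCLE_DAYS
    return released + timedelta(days=days)

# Routine patches wait for the next cycle; a zero-day fix is fast-tracked.
print(patch_due_date(date(2025, 3, 1)))                  # 2025-03-31
print(patch_due_date(date(2025, 3, 1), emergency=True))  # 2025-03-02
```

The simplification is that the only per-patch decision left is a binary one, routine or exceptional, rather than a per-vulnerability score and a per-score deadline.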
A second example brings together NIST and FedRAMP. One of the policies that I worked on last year for FedRAMP dealt with the conflict where FedRAMP was telling cloud providers that they have to patch in a certain amount of time, and NIST was telling cloud providers that they have to use NIST-validated cryptographic modules. These policies conflict when a provider needs to patch to comply with FedRAMP but the patches aren’t yet NIST-validated.
Both agencies had created their policies in support of the laws they are subject to. So how do you come up with a solution that resolves the conflict and provides the best security outcome, while recognizing that these contradictory laws are in place that we’re all being asked to follow?
There are times like that where you truly see both sides of the issue. You understand that each side has its obligation and its mission. But you need to do fundamentally what’s best from a security perspective. And I think in the end, the new policy we created was as helpful as we could be in clearing up the conflict.
Q6: Do you foresee the role of assessors changing as NIST guidance changes?
A: Things sure used to be a lot easier. When I was first working for NIST, we did a lot of checklists, like for Windows XP security: here are the settings, and here are the recommended values for them. That seems quaint now. Now we have these incredibly complex systems with all these third-party services and components, and just figuring out where the system boundary is for FedRAMP purposes is crazy enough, much less figuring out how to assess it.
I assume that the role of assessors is going to keep getting tougher. They need to have greater and greater understanding of a wider variety of technologies, how they work together, and how the security controls do or don’t carry across those intersections of those technologies. I don’t envy assessors their jobs at all.
Q7: Of all of the publications you’ve contributed to, which one are you the most proud of, and why?
A: I’m the most proud of NIST SP 800-61, the incident handling guide. That was the first NIST publication that I wrote back in 2003. At that time, most organizations didn’t have any incident response capabilities, programs, or policies. I’d previously worked in a security operations center, reviewing intrusion detection alerts and aiding in clients’ incident responses. When I wrote the incident handling guide, I combined my experience with the limited information out there from CERT and a few other organizations, melded those concepts and fleshed them out a bit, and came up with a guide for organizations just starting out in incident handling.
And it took off. It ended up becoming this foundational document that was cited close to a thousand times. Then I was fortunate enough over the years to assist NIST with all the updates. We just released revision 3 earlier this year, which made the shift to a CSF 2.0 profile. Even with all those changes, the basic concepts from 2003 are still there. The iterations of SP 800-61 have been some of NIST’s most downloaded documents, so I’m really proud not only that I created the original, but that all these years later, the updates are still in use, and people are still finding value in them.

