What does it mean to build in security from the ground up?

As if secure design were the only bullet point in a list of software engineering best practices


Systems Approach As my Systems Approach co-author Bruce Davie and I think through what it means to apply the systems lens to security, I find I keep asking myself: what is it, exactly, that's unique about security as a system requirement?

That question takes me back to a time before security became such a mainstream news topic; before security breaches were so common that we became desensitized to news of the latest one. Believe it or not, there was a time when internet security was not on the public's mind, and the task at hand was to raise awareness of the security risks the internet posed.

This was well after the Morris Worm made it painfully obvious how impactful a security incident could be. That was a wakeup call for the research community (myself included), who were at the time the only serious internet users. That experience (and others) eventually led to a concerted effort to educate the public about security. Two personal opportunities in the mid-2000s to get on a soapbox and talk about security come to mind.

The first was an invitation to be a guest on Ira Flatow’s Science Friday. I haven’t been able to reconstruct the details — there were other “future of the internet” experts on the show — but I do remember my role was to talk about the risks of security incidents, and how we needed to rethink the internet architecture from the ground up to make it more secure. (This was at a time when PlanetLab, a networking research hub of which I was director, was getting a lot of press coverage as a laboratory for reinventing the internet.)

My only vivid memory of the experience is that I called into the Science Friday show over an ISDN line terminated in a sound recording room at Princeton, and the echo on the line was so bad it was hard to keep your wits about you.

The second opportunity took place at a Princeton Development Retreat at Pebble Beach. (In university vernacular, “Development” is a fundraising endeavor and, at Princeton, faculty are sometimes invited to talk about their research at events attended by wealthy alumni.) In this particular case, I teamed up with Tom Leighton, an alum and co-founder of Akamai, to talk about internet security.

Tom played the bad cop (telling everyone about the security threats exposed by the data Akamai collects) and I played the good cop (Princeton researchers to the rescue, building a more secure internet for the future). We must have made an impression because, shortly afterwards, we were invited to the White House to brief the Deputy National Security Advisor on internet security risks.

Again, my most vivid memory is of the most banal detail – in this case, how small an office the advisor had. I'm also pretty sure we weren't telling him anything he didn't already know.

Security as a motivator

What do I take away from these stories? For one, while I believe it is important for innovators to educate and inform the public about the implications of technology they are inventing, it is also clear that we (computer science researchers, the universities that employed us, and the government funding agencies that lobbied for appropriations so they could fund our research) were at least in part using security for its motivational value. There’s nothing quite like fear to get people to act.

That’s definitely one thing that’s unique about security as a systems requirement: Security is something that’s easily understood by the general public. But I wonder whether the “we need to build in security from the ground up” part of the message actually makes sense. It sounds good — and the alternative of “bolting security onto existing systems” was intentionally pejorative – but I’m not sure it’s a meaningful goal.

One reason to doubt that message — which gets at the heart of the "what's unique" question — is that we have succeeded in building highly modular security mechanisms that can be reused by any and all applications. Kerberos and TLS are two great examples. The last thing we'd want is for every system or application to have to get right the details of complex authentication protocols, key distribution, and so on. Pushing for security "from the ground up" ought not to discourage the use of perfectly capable, preexisting modular security mechanisms.
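To make the reuse point concrete, here is a minimal sketch (in Python, chosen purely for illustration) of leaning on an off-the-shelf TLS implementation rather than reinventing transport security. The `open_tls_connection` helper is a hypothetical name, not something from the article; the point is that the hard decisions come baked in.

```python
import socket
import ssl

# ssl.create_default_context() encodes years of accumulated best
# practice: certificate verification, hostname checking, and sane
# protocol/cipher defaults are all enabled out of the box.
context = ssl.create_default_context()

# The hard security decisions have already been made for us.
assert context.check_hostname is True
assert context.verify_mode == ssl.CERT_REQUIRED

def open_tls_connection(host: str, port: int = 443) -> ssl.SSLSocket:
    """Wrap a plain TCP socket in TLS; handshake, key exchange, and
    record encryption are all handled by the reused mechanism."""
    raw = socket.create_connection((host, port))
    return context.wrap_socket(raw, server_hostname=host)
```

An application that wants authenticated, encrypted transport calls the helper and otherwise never touches cryptography; that is what "modular" buys you.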

What security-related issues do you have to address on day one? Certainly, if you expect a system to be multi-tenant, you have to define the level of isolation you want to maintain between tenants/users, and implement mechanisms that enforce it, from the ground up.

But I don’t consider isolation (or its counterpart, resource sharing) to be a security question, at least not in the ways we talk about security today. Early timesharing operating systems and filesystems supported isolation without directly considering potential attack vectors. Isolation was primarily about fair resource allocation and efficient utilization; naming and addressing were critical to enabling resource sharing; and privileged operations were limited to the kernel. Malicious attacks were generally not a “failure mode” under consideration. Design questions about isolation, privilege, and access control were expressed as “positive goals” that could be satisfied.

Today, if you were to build a multi-tenant system, you’d have to start with the same fundamental design issues — e.g., identify the relevant principals and resources and specify who can access what — but then you would employ existing security mechanisms to protect the resulting system from known attacks.
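Those fundamental design questions can be sketched in a few lines. The tenants, resources, and rules below are invented for illustration; the shape — name the principals and resources, then specify who can access what, denying by default — is the "positive goal" the article describes.

```python
# Hypothetical access-control table: (principal, resource) -> allowed ops.
ACL = {
    ("alice", "volume-1"): {"read", "write"},
    ("bob",   "volume-1"): {"read"},
}

def authorized(principal: str, resource: str, op: str) -> bool:
    """Positive goal: grant an operation only if a rule allows it."""
    return op in ACL.get((principal, resource), set())

# Isolation falls out of the default-deny lookup: a tenant with no
# rule for a resource simply cannot touch it.
assert authorized("alice", "volume-1", "write")
assert not authorized("bob", "volume-1", "write")
assert not authorized("mallory", "volume-1", "read")
```

Hardening this against actual attacks — spoofed principals, confused-deputy requests, leaked credentials — is where the preexisting security mechanisms come in.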

This suggests that knowing about the state of the art in security mechanisms, and how to use them, is what it means to build in security from the ground up. It turns out this is just one bullet item on a general list of best practices that software companies require their developers to follow.

If you're not familiar with such requirements, take a look at Microsoft’s Security Development Lifecycle (SDL) Practices, which is targeted at app developers who might deploy their services on Azure. I’d wager most software companies have similar, if not more stringent, engineering requirements for their own engineers to follow. I’d also wager that the measures each company takes to ensure those rules are followed vary widely.

The list is as applicable to sound software engineering in general as to security specifically, but the existence of security-focused lists like this suggests to me that security is unique in one way: the strong negative incentive that a failure to secure provides. The failure modes are as unlimited as an attacker’s imagination, making security a “negative goal.”

Personally, I’ve never found work that primarily involves keeping bad things from happening all that satisfying, but as I learned from my “soapbox” experiences, it is a strong motivator. ®

Larry Peterson and Bruce Davie are the authors behind Computer Networks: A Systems Approach and the related Systems Approach series of books. All their content is open source and available for free on GitHub. You can find them on Mastodon, their newsletter right here, and past The Register columns here.
