White paper: Setting and Achieving Security Design Goals

This white paper provides a framework for system security design and for choosing the security software needed to achieve a system’s security goals.  Four software models are examined and critiqued, with recommended strategies for choosing vendors within those differing models.

Reasons for Evaluating Security Software

The process of evaluating security software is complex for several reasons.  First, the technical pros and cons of the solutions you are considering may be difficult to enumerate without advice from seasoned security experts.  Your organization may not have this expertise.

The solution with the lowest initial cost may have substantially higher development and support costs over the entire product lifecycle.  The free open-source solution may end up costing more over time than its commercial counterpart.

There could be substantial liability issues if the software you choose is compromised.  There have been numerous high-profile security breaches over the past several years which have been very damaging to the affected companies.  New legislation, particularly the “General Data Protection Regulation” (GDPR) in Europe, greatly increases manufacturers’ and service providers’ financial exposure if a security breach exposes their customers’ private data.

Lastly, the service model for IoT requires a resilient system so that a deployed device does not require field maintenance.  If an IoT system gets “bricked” by a cyberattack, fixing it may be impossible or prohibitively expensive.  Strong security and recoverability can provide powerful resilience properties to a new generation of IoT devices.


Ensure Accountability Now Rather than Trouble Later

Safety and security share a lot of the same characteristics in product design. If implemented perfectly, they will do their jobs unnoticed, until something goes wrong. Customer demand drives new features and lower prices, often forcing security and safety onto the product design “backburner”. IoT device manufacturers need to take the long view, because it is over the long term that the security and safety of the system will be proved, or disproved, sometimes with disastrous consequences.

The Harvard Business Review illustrates this scenario with an example from the car industry:

Although managers’ bonuses are based partly on vehicle-quality improvements, and safety is supposed to be paramount, cost is “everything”…    the company’s atmosphere probably discouraged individuals from raising safety concerns… a former… manager described a workplace in which the mention of any problems was unacceptable.

To address security effectively, consider the costs over the entire life of the product.  Everyone knows security is paramount, but it is often difficult to design security into the system while controlling costs.  Where possible, reduce overall security costs by choosing technologies that meet your requirements cost-effectively. Investment in better security could pay big dividends in the future. As the famous FRAM oil filter television commercial told us:

“You can pay me now or you can pay me later.”


Getting Security Requirements and Design Right

System security is more of an evolving journey than a destination. The security you design today will not be perfect, but you should have a disciplined process for the system’s security requirements, implementation and testing. Keep records to show that you followed your security processes – you may need them later to show your “due diligence.” The following steps can be taken to get the right security for your product.

  1. Get a clear view of your security goals and requirements.  Convince yourself they are adequate for the type of system being designed and the threat landscape in which it must survive for the projected product lifetime. Question your assumptions.
  2. Get those security goals and requirements vetted by an expert.  Security is a highly specialized field and shouldn’t be left to inexperienced employees.
  3. Decide which security pieces will be done in-house and which will be outsourced for cost and/or expertise reasons.  Sometimes it is far cheaper to use external experts than to develop and maintain security skills in-house.
  4. Design security features into your project from the beginning of the design.  Don’t wait until the last minute to incorporate and test security features. Security applied at the end of a design usually leads to unsatisfactory results.
  5. Include a method of securely updating software on deployed devices. Make sure devices strongly authenticate each update, and put strong protections on the public portion of the signing keys so that attackers cannot trick a device into accepting their malware.  New vulnerabilities will be discovered, so you must have a way of mitigating those threats.
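As one concrete illustration of step 5, the sketch below (Python, with purely illustrative names and values) shows a device that keeps an allow-list of SHA-256 fingerprints of trusted signing keys, accepts an update-signing key only if its fingerprint is on that list, and checks the downloaded image against the digest carried in the signed manifest. A production system would additionally verify an asymmetric signature (for example RSA or ECDSA) over the manifest, ideally inside a hardware security device.

```python
import hashlib

# Hypothetical allow-list of SHA-256 fingerprints of trusted update-signing
# public keys.  On a real device this list would live in tamper-protected
# storage, not in an ordinary file.
TRUSTED_KEY_FINGERPRINTS = {
    # Illustrative entry only (the SHA-256 of the bytes b"test").
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(public_key_bytes: bytes) -> str:
    """Hex-encoded SHA-256 fingerprint of a serialized public key."""
    return hashlib.sha256(public_key_bytes).hexdigest()

def key_is_trusted(public_key_bytes: bytes) -> bool:
    """Accept an update-signing key only if its fingerprint is allow-listed."""
    return fingerprint(public_key_bytes) in TRUSTED_KEY_FINGERPRINTS

def image_matches_manifest(image: bytes, expected_sha256_hex: str) -> bool:
    """Check a downloaded image against the digest in the signed manifest."""
    return hashlib.sha256(image).hexdigest() == expected_sha256_hex
```

The essential point is that the device never trusts an update image directly; it trusts only keys whose fingerprints it already holds, so substituting an attacker’s key fails the allow-list check.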

Here’s a short list of modern cybersecurity fundamentals that often get missed.  Use these as a quick check of how you’re doing:

  1. Are you storing critical keys in “the clear”, for example in the filesystem, rather than protecting them with a hardware security device?
  2. Can your product establish “strong identity”?  Is there a private key used to identify your system that is sequestered in hardware and cannot enter system memory where it can be compromised? Note that a MAC address is not strong identity!
  3. Does your product have a self-defending Trusted Computing Base from which your system can recover from an attack?
  4. Can an external server challenge your device and get a reliable device security health check?  If malware penetrates your system, can such an external health check be trusted to still be accurate?
  5. Do you have a good entropy source for key generation?
  6. Can you detect rootkits and bootkits when you boot your system?
  7. Do you have a secure storage area requiring special hardware and authorization mechanisms in which to store sensitive and protected data objects such as lists of public keys allowed for signed code updates?
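Question 2 above, “strong identity”, is typically answered with a challenge-response protocol: the server sends an unpredictable nonce and the device proves possession of its key without ever revealing it. The sketch below is a simplified symmetric (HMAC) version with illustrative names; a real device would instead sign the nonce with an asymmetric key sequestered in hardware, so the verifying server holds only the public key.

```python
import hashlib
import hmac
import secrets

# Illustrative only: a real device keeps its identity key sequestered in a
# TPM or secure element, where it can never enter system memory.
DEVICE_KEY = b"example-device-secret"

def server_challenge() -> bytes:
    """Server side: generate an unpredictable nonce to prevent replay."""
    return secrets.token_bytes(32)

def device_response(key: bytes, nonce: bytes) -> bytes:
    """Device side: prove possession of the key without revealing it."""
    return hmac.new(key, nonce, hashlib.sha256).digest()

def server_verify(key: bytes, nonce: bytes, response: bytes) -> bool:
    """Server side: recompute the expected response, compare in constant time."""
    expected = hmac.new(key, nonce, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)
```

Note the contrast with a MAC address: a MAC address can be copied by anyone who observes it, whereas a challenge-response proof cannot be replayed because each nonce is fresh.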

Many systems will fail this checklist.
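Question 5, entropy, is one of the cheapest items on the list to get right in software. In Python, for example, the distinction is between the cryptographically strong secrets module, which draws from the operating system’s CSPRNG, and the deterministic random module, which must never be used for keys:

```python
import random
import secrets

# Good: secrets draws from the operating system's CSPRNG (os.urandom),
# which is suitable for generating key material.
aes_key = secrets.token_bytes(32)   # 256 bits of key material

# Bad: random is a deterministic Mersenne Twister.  An attacker who observes
# enough outputs can reconstruct its state and predict every subsequent
# "key" it produces.  Never use it for cryptography.
predictable_value = random.getrandbits(256)
```

On constrained IoT hardware the harder problem is seeding: the operating system’s CSPRNG is only as good as the entropy sources feeding it, which is one reason hardware random number generators are valuable.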

Four Software Models

You’ll need to make choices about what security software to use as you design your systems.  You’ll need to decide what you’ll develop internally and what you’ll acquire externally. The remainder of this white paper is intended to help with the choice between developing and acquiring reliable, high-security software.  Essentially, the software you use to build your systems fits into one of four models:

  • Do It Yourself 
    Typically, your core software differentiates your system from competitors.  Your intellectual property (IP) and distinctive features drive customer demand. The code is either created by your programmers or contractors.  Most companies focus their efforts on this code and acquire other software from external sources.
  • Closed Source Code – Non-Standard API
    Software in this category is entirely proprietary.  The APIs are developed by its supplier and are not open standards.  The underlying implementation is not visible to the end user. Code is provided to end users in binary form ready for use.  Many customers shy away from closed source code with non-standard APIs because they don’t want to be “locked in.” They want to be able to shop for better prices and they want the option of getting their solutions from alternative suppliers without having to redesign their software to a new set of APIs.

        “Many development firms try to sell their proprietary systems so they can lock in clients.”

A big advantage of this model is that successful providers of such software have deep expertise and are generally well-funded, so they can provide good long-term support and high product quality.

  • Closed Source Code – Standardized API
    Software in this category is partially proprietary.  The APIs are developed by recognized technical consortiums or governmental groups and are considered open standards.  The underlying implementation is usually not visible to the end user. Code is provided to end users in binary form, ready for use.  When code is based on a standard API defined by a specification, the customer can potentially acquire the code from multiple sources.  Essentially, this provides a compromise between the “Closed Source – Non-Standard API” and “Open Source” models, potentially offering the best of each.  The open source community argues that open source leads to greater code quality and security, but this is often not the case. On the other hand, closed source code is not guaranteed to be of high quality and high security.  Choosing a good partner is key. A good partner can give you economies of scale, do extensive testing, attend standards meetings and maintain the detailed expertise needed to support the code over its entire lifecycle.

A Checklist to Help with the Choice of a Security Software Partner

Here are some things to consider when you choose a security software partner to provide you with code based on standardized APIs:

    1. How long has that provider been providing this code or previous generations of the standard and code?
    2. Has the supplier had major security breaches?
    3. Can the supplier help you with support and special needs for your product?
    4. Does the provider participate in the creation and maintenance of the standard?
    5. Does the provider do comprehensive code testing?
    6. Does the provider maintain code for a wide variety of platforms?
    7. Does the provider supply the code to major manufacturers and service providers?
    8. How long has the provider been around?
    9. Does the provider have technical breadth and depth?

(A case study using this checklist is provided at the end of this paper.)

  • Open Source Code
    Open source software is provided as source code which end users can access and compile.  It is often also provided as executable code compiled for one or more target operating systems and CPU architectures.  The licenses on open source vary broadly. The most liberal licenses allow you to create derivative works while merely acknowledging where you got the original code.  Others require you to contribute any source code modifications back to the original open source repository; these are often referred to as “copyleft” licenses. Some licenses also require you to release as open source any code that you create to link to the provided open source code.  Open source code may or may not attempt to comply with applicable software API standards. Most open source is provided with the caveat “No guarantees are made with this code. Use at your own risk. If you have problems, you are on your own.” Often, open source is thought of as “free software”, but the reality is more complex.

“The open-source model is a decentralized software-development model that encourages open collaboration. A main principle of open-source software development is peer production, with products such as source code, blueprints, and documentation freely available to the public….  Open-source code is meant to be a collaborative effort, where programmers improve upon the source code and share the changes within the community.”

Open source code is only as good as the work done by the community that supports it.  In many cases, these communities are undermanned, underfunded and short-lived. Using open source code without scrutinizing and participating in the “peer production” of that code can lead to problems.  The OpenSSL Heartbleed security exposure was a case study in this problem.

“…vendors just regarded OpenSSL as a useful bolt-on to their hardware products — and, since it was open source, assumed other people were examining the code. ‘Everyone assumed other eyeballs were looking at it. They took the attitude that it was a million other people’s responsibility to look at it, so it wasn’t their responsibility’…  ‘That’s where the negligence comes in from an open source angle.’”

Just because open source is available to be scrutinized doesn’t mean it is being scrutinized.  It should also be noted that scrutiny is not formal code testing. In many open source projects (especially those in user space, outside of OS kernels), the testing and peer scrutiny are inadequate and not maintained over the entire code lifecycle.  Additionally, even where peer review does occur, problems can still be very difficult to detect, depending on the subtlety of the problem and the expertise of the reviewers.

“The OpenSSL Heartbleed fiasco proves beyond any doubt what many people have suspected for a long time: Just because open source code is available for inspection doesn’t mean it’s actually being inspected and is secure.  It’s an important point, as the security of open source software relies on large numbers of sufficiently knowledgeable programmers scrutinizing the code to root out and fix bugs promptly. This is summed up in Linus’s Law: ‘Given enough eyeballs, all bugs are shallow.’”

Experience has shown that open source communities have been very good at innovation and creation.  Review and testing are often problematic. The closer the open source code is to the operating system kernel, the larger the supporting community tends to be and the higher the code quality.  As the code moves farther from the kernel, peer review, testing and support get much sparser. Weigh this carefully when deciding what code to incorporate into your system.

Where Do You Want to Invest Your Resources?

If you’re going to use open source software, best practices suggest that you need a team of people assigned to analyze prospective open source code, participate in open source communities, do testing, and potentially take it over if the community collapses.

Before implementing any open source software, it is imperative to perform a thorough evaluation to assess any flaws or risks that may potentially arise. This will help you invest in the most stable solution for your needs and reduce the risk of vulnerabilities cropping up down the line. Your development team should be deeply involved in this process, looking at the history of the open source project to identify any past issues and assess the likelihood of further problems in the future.

Security programmers are highly skilled, expensive and in short supply.  The wiser choice is usually to free your programmers to work on code that differentiates your product, while acquiring the more standardized components that require deep expertise and ongoing maintenance from trusted business partners.

Case Study

Choosing a TCG Software Stack for Use with Trusted Platform Modules

The TCG Software Stack (TSS) for TPM 2.0 is middleware that enables applications to share the Trusted Platform Module (TPM).  The TPM, as specified by the Trusted Computing Group consortium, is an inexpensive but complex hardware root of trust.  There are approximately 2,000 pages of standards specification for TPM 2.0. Users generally find it easier and more efficient to acquire TSS 2.0 middleware than to attempt to write it themselves.
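To give a flavor of what the TPM does beneath the TSS, the sketch below simulates the TPM 2.0 PCR “extend” operation used in measured boot: each update folds a new measurement into a running hash, so the final PCR value commits to every component measured, in order. This is a simplified Python model with illustrative measurement values, not a substitute for a TSS; a real TPM performs the extend inside hardware.

```python
import hashlib

PCR_SIZE = 32  # size in bytes of one PCR in the SHA-256 bank

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """Simplified TPM 2.0 extend: new PCR = H(old PCR || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

# Measured boot: each stage measures the next component before running it.
pcr0 = bytes(PCR_SIZE)                        # PCRs reset to zeros at power-on
pcr0 = pcr_extend(pcr0, b"bootloader image")  # illustrative measurements
pcr0 = pcr_extend(pcr0, b"kernel image")
# Any change to either image, or to the order of measurement, produces a
# different final PCR value, which a remote attestation check can detect.
```

A PCR can only be extended, never written directly, which is why malware cannot forge a “clean” boot record after the fact; the TSS’s job is to marshal commands like this to the chip and manage its limited resources across applications.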

OnBoard Security is a supplier of TSS 2.0.  The proposed checklist is filled out for OnBoard Security here:

  1. How long has that provider been providing this code or previous generations of the standard and code?
    OnBoard Security was previously the Embedded Systems Unit of Security Innovation. OnBoard Security (then Security Innovation) delivered a successful commercial version of the TCG Software Stack v1.2 (TSS 1.2) to the industry nearly 10 years ago, and that stack is still available and in use today.
  2. Has the supplier had major security breaches?
    OnBoard Security has been providing TSS 1.2 and TSS 2.0 code to customers without security incidents for almost a decade.
  3. Can the supplier help you with support and special needs for your product?
    Yes.  OnBoard Security has longstanding and substantial expertise in trusted computing to help solve any unique issues that may arise.
  4. Does the provider participate in the creation and maintenance of the standard?
    OnBoard Security is a member of the Trusted Computing Group (TCG) and the chair of the TCG Software Stack 2.0 Workgroup.
  5. Does the provider do comprehensive code testing?
    OnBoard Security does comprehensive TSS 1.2 and TSS 2.0 testing – testing both positive and negative test cases.  OnBoard Security uses Klocwork static analysis tools from Rogue Wave to do static analysis for security and check for compliance with the MISRA-C coding standard.
  6. Does the provider maintain code for a wide variety of platforms?
    OnBoard Security’s TrustSentinel TSS 1.2 and TSS 2.0 are provided for Linux, Raspbian and Windows.  Other operating systems are supported on customer request.
  7. Does the provider supply the code to major manufacturers and service providers?
    OnBoard Security supplies TrustSentinel TSS 1.2 and TSS 2.0 to major manufacturers today.
  8. Does the provider have technical breadth and depth?
    OnBoard Security provides products for post quantum cryptography and automotive Vehicle-to-Everything Communications security in addition to the trusted computing middleware.  OnBoard Security has been at the core of TCG TSS standards development and software production, maintenance and testing for many years.

Gene Carter