Of course they aren’t!
But despite the continued parade of major information security breaches by companies large and small, this title (Betteridge’s law anyone?!) is justified by the number of organisations that still rely upon penetration tests as their only line of defence against security vulnerabilities. This isn’t a good idea.
The problem with relying on penetration tests
Relying on penetration tests as your sole cyber-security control is like assessing the health of the Titanic using a photo taken from the iceberg.
A penetration test is unable to give you any indication of what’s coming in the future because it’s static, point-in-time, and outside-in (just like that photo!). It is of limited value in understanding ongoing health.
Penetration tests also have costs. There are the direct costs of paying for them (most organisations seem to use third parties for this), but perhaps more significantly – because they almost always act as approval gates – they can actually block value from being delivered to your customers regularly and frequently.
This isn’t to say that penetration tests have zero value; they are a useful tool as part of a wider set of security controls – but their use should be determined through deliberate security risk assessment and threat modelling.
What we recommend instead
The primary recommendation is to operate a system of continuous security, as described in our Secure Delivery Playbook (published under a Creative Commons licence!).
However, we understand that simply telling you where you should be isn’t always helpful. We also like to help our clients understand where they currently are.
If you feel you have a Security Blind Spot – i.e. products, already in use, that you think may have been delivered without active engagement from an infosec team, and you lack confidence that they have sufficient security controls in place – then there is another approach that we can share.
Security Health Check
The aim of a security health check is to ensure – in an objective way, sympathetic to the culture of a “blameless post-mortem” – that the team, the stakeholders, and the organisation are aligned in their understanding of the current state of security controls around a system/product. This involves:
- Celebrating the good work involved in the security controls that are already in place (often we find that a lack of confidence is caused by a lack of visibility, rather than a complete lack of controls)
- Taking a risk-based approach to prioritising which controls should be next to be added
- Providing practical tools and advice to the engineering team to help them most easily adopt new controls and practices
The Nitty Gritty
The general approach that we take, and are happy to share, is as follows:
1. Gather Review Team/Squad
It’s important that those expecting to contribute to the review are explicitly identified. What you need will depend on the Context and Scope agreed (see below) but we have found that we typically need 2-3 people dedicated to the review for a period of 5-10 days – as well as time from the development team. Those doing the review should be “external”. This doesn’t necessarily mean third-party, it could be sourced from inside your own organisation if you have the requisite skills – but it should be independent from both the product development team and the infosec individuals who are currently responsible for this part of the portfolio.
2. Establish System Context
Identify the key stakeholders and meet with them to understand their vision and needs.
Security controls can only be identified and prioritised in the context of an actual system, organisation and business use case (which is why off-the-shelf security scans have limited value).
3. Agree Scope of Review
Be clear about the specific areas of your systems and processes that are under review, to ensure you (a) have enough time and (b) have the right expertise available to achieve a comprehensive and technically deep review.
4. Gather Information/Data
The specific activities you need to carry out will vary depending on the context and scope you’ve established above – but here is an example from a recent health check we carried out:
- Asset and Data Identification – assets, functions, sensitivity and business impact if compromised.
- Architectural Security Review – flow of data, authorised principals, component authentication, access controls, data protection, and monitoring.
- Cloud Security Compliance Scan – automated scanning tools to check configuration of the team’s cloud estate against industry benchmarks (CIS).
- Secure SDLC Maturity Review – software delivery lifecycle (SDLC) workflows: release management, security reviews, scanning tools and system operation (against industry benchmark).
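The compliance-scan step above can be sketched as a small rule engine: each benchmark control becomes a predicate over a snapshot of your cloud configuration. This is an illustrative sketch only – the config shape, rule references and checks below are hypothetical stand-ins, not real CIS controls or a real scanning tool.

```python
# Minimal sketch of a benchmark-style compliance scan.
# Rule references and the config snapshot shape are illustrative assumptions.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    ref: str          # benchmark reference (hypothetical control numbers)
    description: str
    check: Callable[[dict], bool]

RULES = [
    Rule("1.5", "MFA is enabled for the root account",
         lambda cfg: cfg.get("root_mfa_enabled", False)),
    Rule("2.1.4", "Storage buckets block public access",
         lambda cfg: all(b.get("block_public_access")
                         for b in cfg.get("buckets", []))),
    Rule("3.1", "Audit logging is enabled in all regions",
         lambda cfg: cfg.get("audit_logging") == "all-regions"),
]

def run_scan(config: dict) -> list[dict]:
    """Evaluate every rule against the snapshot and return the failures."""
    return [{"ref": r.ref, "description": r.description}
            for r in RULES if not r.check(config)]

snapshot = {
    "root_mfa_enabled": True,
    "buckets": [{"name": "logs", "block_public_access": True},
                {"name": "public-assets", "block_public_access": False}],
    "audit_logging": "single-region",
}
findings = run_scan(snapshot)  # fails rules 2.1.4 and 3.1
```

Real scanners work the same way at heart – a library of checks evaluated against discovered configuration – which is why their output maps cleanly onto benchmark references.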
5. Play Back Findings
We find it useful to separate the findings into four sections (depending on your needs and the team carrying this out, these could be more or less formal):
- Overall Security Findings Report – Detailing the review process, an inventory of assessed systems and principals, and a matrix of identified issues and their corresponding business impact.
- Security Program Maturity Report – Report identifying current secure delivery maturity, including recommended approach to reach an agreed maturity level in terms of processes, policies and security tools.
- Compliance Benchmark Recommendations Report – A triaged report showing compliance with benchmarks such as CIS, including recommended remediation path based on impact.
- Architectural Control Recommendations Report – A set of security requirements and implementation recommendations to remediate any large-scale architectural findings, allowing engineers to understand the impact of control deficiencies and a recommended approach to remediation.
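The “matrix of identified issues and their corresponding business impact”, and the impact-based remediation path, can be illustrated with a simple triage function. The scoring scheme below is a hypothetical example – real scores should come from your own risk assessment, not from this sketch.

```python
# Sketch of impact-based triage: rank findings by impact x likelihood so the
# remediation plan starts with the highest-risk items. Scores are illustrative.
IMPACT = {"low": 1, "medium": 2, "high": 3}
LIKELIHOOD = {"unlikely": 1, "possible": 2, "likely": 3}

def triage(findings: list[dict]) -> list[dict]:
    """Return findings sorted by risk score, highest risk first."""
    return sorted(
        findings,
        key=lambda f: IMPACT[f["impact"]] * LIKELIHOOD[f["likelihood"]],
        reverse=True,
    )

findings = [
    {"issue": "Public storage bucket", "impact": "high", "likelihood": "likely"},
    {"issue": "Stale IAM user", "impact": "medium", "likelihood": "possible"},
    {"issue": "Verbose error pages", "impact": "low", "likelihood": "likely"},
]

plan = triage(findings)  # "Public storage bucket" ranks first (score 9)
```

The value of even a crude scheme like this is that stakeholders and engineers argue about the inputs (impact, likelihood) rather than the ordering, which falls out mechanically.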
A structured security health check reduces your risk by ensuring that all parties have an aligned and up-to-date view of the current security controls around a system or product. It will also provide a tangible plan of pragmatic remedial actions that is well understood by both stakeholders and delivery teams.
If you have concerns about any of your systems/products then please get in touch.
There are more resources available on our Secure Delivery hub.
As we emerge from the pandemic, for many businesses the biggest concern isn’t being too bold – it’s being too cautious.
Business leaders are looking to accelerate transformation and deliver ambitious new services that are invariably delivered through technology. IT leaders are in the hot seat, and that’s a worry if you’re not 100% confident in your data.
Can you guarantee that data quality meets requirements? Do you have the systems and skills to integrate data from multiple platforms, silos and applications? Can you track where data comes from, and how it is processed at each stage of the journey?
If not, you’ve got a data governance problem.
Without strong, high-quality governance, organisations are at the mercy of inaccurate, insufficient and out-of-date information. That puts you at risk of making poor decisions that lead to lost business opportunities, reputational damage and reduced profits – and that’s just for starters.
What does high-quality data governance look like?
It’s likely that the IT department will own data governance, but the strategy must be mapped to wider business goals and priorities.
As a rough guideline, here are the key things that we think must be part of an effective data governance strategy:
- Data security/privacy: do we have the right measures in place to secure data assets?
- Compliance: are we meeting industry and statutory requirements in areas such as storage, audit, data lineage and non-repudiation?
- Data quality: do we have a system in place to identify data that is poor quality, such as missing data points, incorrect values or out-of-date information? Is such information corrected efficiently, to maintain trust in our data?
- Master/Reference data management: if we look at data in different systems, do we see different answers?
- Readiness for AI/automation: if we are using machine learning or AI, do we know why decisions are being made (in line with regulations around AI/ML)?
- Data access/discovery: Are we making it easier for people to find and reuse data? Can data consumers query data catalogues to find information, or do we need to find ways to make this easier?
- Data management: Do we have a clear overview of the data assets we have? This might require the creation of data dictionaries and schemas that allow for consistent naming and versioning of data items.
- Data strategy: What business and transformation strategy does our data support? How does this impact the sort of decisions we make?
- Operating model: do we need to create an operating model so the business can manage – and gain value from – this data?
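To make the data quality point above concrete, here is a minimal sketch of the kind of automated check that can flag missing data points, incorrect values and out-of-date information. The field names, validation rules and staleness threshold are all illustrative assumptions – yours would come from your own data dictionary.

```python
# Sketch of record-level data quality checks: missing fields, invalid
# values, and stale timestamps. Field names and thresholds are illustrative.
from datetime import date, timedelta

REQUIRED = ["customer_id", "email", "updated_on"]
MAX_AGE = timedelta(days=365)  # assumed freshness requirement

def quality_issues(record: dict, today: date) -> list[str]:
    """Return a list of quality problems found in a single record."""
    issues = [f"missing:{field}" for field in REQUIRED if not record.get(field)]
    email = record.get("email")
    if email and "@" not in email:          # crude validity check, for illustration
        issues.append("invalid:email")
    updated = record.get("updated_on")
    if updated and today - updated > MAX_AGE:
        issues.append("stale:updated_on")
    return issues

today = date(2021, 6, 1)
good = {"customer_id": 1, "email": "a@example.com", "updated_on": date(2021, 5, 1)}
bad = {"customer_id": 2, "email": "not-an-email", "updated_on": date(2019, 1, 1)}

assert quality_issues(good, today) == []
# the bad record is flagged for an invalid email and a stale timestamp
```

In practice these checks live inside the data pipeline itself, so poor-quality records are flagged (or quarantined) as they flow, rather than discovered months later in a report.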
Moving from data policy to data governance
As we can see, data governance is about more than simply having an IT policy that covers the collection, storage and retention of data. Effective, high-level data governance needs to ensure that data is supporting the broader business strategy and can be accessed and relied upon to support timely and accurate decision-making.
So how do IT leaders start to move from the former view of governance to the latter?
While it can be tempting for organisations to buy an off-the-shelf solution for data governance, it’s unlikely to meet your needs, and may not align with your strategic goals.
Understanding your strategy first means the business can partner with IT to identify the architecture changes that might be needed, and then identify solutions that will meet these needs.
Understanding Lean Data Governance
Here at Equal Experts, we advocate taking a lean approach to data governance – identify the outcomes you are trying to achieve, and implement the measures needed to meet them.
The truth is that a large proportion of the concerns raised above can be met by following good practices when constructing and operating data architectures. You’ll find more information about best practices in our Data Pipeline and Secure Delivery playbooks.
The quality of data governance can be improved by applying these practices. For example:
- It’s possible to address data security concerns using proven approaches such as careful environment provisioning, role-based access control and access monitoring.
- Many data quality issues can be resolved by implementing the correct measures in data pipelines, such as incorporating observability so that you can see if changes happen in data flows, and pragmatically applying master and reference data so that there is consistency in data outputs.
- To improve data access and overcome data silos, organisations should construct data pipelines with an architecture that supports wider access.
- Compliance issues are often related to data access and security, or data retention. Good implementation in these areas makes achieving compliance much more straightforward.
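As a concrete illustration of the role-based access control mentioned in the first bullet, here is a minimal sketch: permissions are granted to roles, and users acquire them only through role membership. The roles, users and dataset names are hypothetical.

```python
# Sketch of role-based access control for data assets. Granting permissions
# to roles (not users) keeps access reviewable and auditable.
# All names here are illustrative.
ROLE_PERMISSIONS = {
    "analyst": {("sales_mart", "read")},
    "engineer": {("sales_mart", "read"),
                 ("raw_events", "read"),
                 ("raw_events", "write")},
}

USER_ROLES = {"asha": ["analyst"], "tomas": ["engineer"]}

def can_access(user: str, dataset: str, action: str) -> bool:
    """Allow an action only if one of the user's roles grants it."""
    return any(
        (dataset, action) in ROLE_PERMISSIONS.get(role, set())
        for role in USER_ROLES.get(user, [])
    )

assert can_access("asha", "sales_mart", "read")
assert not can_access("asha", "raw_events", "read")  # analysts never see raw events
```

The same model underpins the access-monitoring point: because every grant flows through a named role, logs of who accessed what can be reviewed against a short, stable list of role definitions.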
The field of data governance is inherently complex, but I hope this article has given you some insight into the core tenets driving our approach.
These insights and much more are in our Data Pipeline and Secure Delivery playbooks. And, of course, we are keen to hear what you think Data Governance means to your organisation. So please feel free to get in touch with your questions, comments or additions on the form below.