DevSecOps: Balancing speed, security and user experience

How can organisations leverage DevSecOps to create a customer-centric approach?

The landscape of modern business operations demands agility, reliability, and security in equal measure. While cybersecurity remains a critical concern, the integration of DevSecOps practices has emerged as a pivotal strategy for organisations seeking to fortify their digital infrastructure while accelerating innovation.

At the heart of DevSecOps lies a transformative ethos: the seamless convergence of development, operations, and security functions. This integration isn’t merely about thwarting threats but fundamentally reshaping how teams collaborate and deliver value. It’s a cultural shift that champions iterative development, continuous integration, and rapid deployment, all while safeguarding against potential vulnerabilities.

Recently, I had the opportunity to discuss DevSecOps as part of the Konnecta Ko-Lab Series 2 event in Sydney. At the event, I discussed how organisations can embed DevSecOps practices and the importance of creating a customer-centric culture.

Supporting engineering teams to adopt DevSecOps

One of the biggest barriers to successful DevSecOps initiatives is the entrenched processes within organisations. Most companies operate in silos, where the success of each team is measured by domain-specific metrics. Development teams may be measured on the speed of feature delivery, product teams on net promoter scores (NPS), and security teams on incident response time.

To overcome these barriers, organisations must cultivate a unified vision that places equal emphasis on feature delivery, operational efficiency, and security. 

While everyone agrees in principle that it is important to build secure and reliable systems, for many organisations there are few immediate and obvious incentives to do so.

Creating a shared vision ensures that every team member embraces their role in delivering secure, reliable, user-centric solutions. Without this, every initiative will trade off non-functional and security requirements first. You can find out more about how this can work in practice in our Secure Delivery Playbook.

User-centred security

Encouraging teams to integrate security within their delivery practices can be aided by focusing more on users. While user-centric design practices are becoming increasingly common in organisations, user-centric security hasn’t yet gained the same prominence. 

Concepts such as compliance, governance, and corporate risk are incredibly important to consider during delivery but rarely resonate with everyone in the business who has a say on how work should be prioritised.

When a cyber-attack occurs, it can result in service interruptions, reputational damage or financial penalties for a company. But customers aren’t merely bystanders in the event of a cyber-attack; they’re the true victims. 

They are the person whose bank account was blocked as a fraud prevention measure, they are the person who couldn’t book an important appointment because the system was unavailable.

Framing the challenges and outcomes in this way helps all team members see security-related processes as a priority, rather than a blocker or an afterthought.

Balancing security and delivery speed

One of the key questions at the Konnecta event focused on how organisations can balance DevSecOps with delivery speed – whilst staying “ahead of the curve” on cyber security threats.

This is a challenging problem. Cybersecurity is a truly adversarial discipline – and it is a situation which is completely asymmetrical. An attacker has a known set of methods that they can attempt, and they need to win once. A defender has to protect against the unknown and must win every time.

Traditional information security values, including defence in depth, least privilege, MFA, and threat detection are vital. More modern DevSecOps practices can strengthen your security posture:

  • Shift left: Conduct security testing sooner in the software and application delivery cycle.
  • Immutable infrastructure: where infrastructure components, once deployed, remain unchanged throughout their lifecycle, promoting consistency and automation
  • Sensible defaults and paved roads: Create defined approaches for common use cases and create intentional friction when people stray from the path.
  • Regular threat modelling: Stay vigilant about potential threats and risk
  • Risk-based approach: Think critically and prioritise the things that will really impact the organisation and users. 

Ultimately, the best position you can be in is to be able to handle change quickly. If you’re in a position where you have established these DevSecOps practices then you’ll be in a position where that is easier. If not, the adaptation becomes more operational processes of shutting down services, preparing your service desk teams to take calls, and displaying informational landing pages for end users.

While it’s easy to state these principles, implementing them in practice is challenging and ultimately, there are no perfect solutions, only trade-offs. You need good people, aligned behind agreed security positions and incentivised to prioritise security to make informed trade-offs. At some point, they will need to decide when to sacrifice delivery speed, assume technical debt, or accept a security risk. 

Creating a customer-centric culture through DevSecOps is possible, but requires a careful balance of speed, security and reliability. Cybersecurity is a top tech interest for Australian businesses in 2024. If you want to learn more about how we can support your DevSecOps initiatives, contact our team in Australia today.

Hacking and cyber attacks are something which IT professionals have had on their radars for a long time. At Equal Experts we work hard to incorporate cybersecurity holistically into our data products, so heading off would-be hackers is something that’s part of our day job. However, a recent spate of security attacks involving a tiny USB pen-testing device has highlighted just how easy it is for just about anyone to cause chaos on any device, without being detected. 

First, I should clarify what a pen-testing device is. Penetration testing is an important step in application development and is used to identify potential security gaps that could leave you at risk from hackers. Pen-testing devices are used by cybersecurity experts to legitimately test their products for vulnerabilities before they go to market. Essentially, they simulate attacks to identify solutions that will defend against criminal hacking. 

The problem is, these devices can be used to destroy pretty much any computer, from laptops and mobile phones to complex hardware. The potential damage could range from annoying (as in this case of a victim who experienced interruption to his phone usage) to catastrophic for UK businesses and public services. Take the student who used a legitimate testing device to destroy 66 pieces of hardware, resulting in more than $58,000 of damage at his college. Or USBKill, which claims to be the ultimate pentesting device, with “unstoppable attack modes” that can permanently disable almost anything; they even have a video showing how to disable laptops, smart TVs and peripherals. 

Pentesting devices are easy to use, and they’re available for anyone to buy online. 

Think about that for a minute. A small USB device that has the potential to fry any computer, or to gain access to people’s personal information. Suddenly, James Bond behaviour feels accessible to all of us! Like knives, cigarettes, alcohol and passports, I wonder if there ought to be rules in place about who can have one? 

Experienced engineers (especially those who grow up in organisations like ours, where trust is implicit) will argue that restrictions get in the way of agile processes, and I agree. Mark Zuckerberg’s famous motto “Move fast and break things” is a rule that generally serves us in software development. But, where there’s a risk that the tool which protects us might be misused to cause damage, isn’t it worth tolerating a framework of regulations to mitigate this?

Operating within a framework is nothing new to most organisations, no matter how much inherent trust there is in the business. Some clients have highly regulated environments that necessitate checks and agreements before individual engineers can make decisions. Our work with HMRC and His Majesty’s Passport Office is a clear example. Sensitive data needs handling sensitively, and agreements have to be in place to ensure security is a priority. We know how to do this. 

Now, I’m not for a minute suggesting that pen-testing devices should be banned. Like a knife, there are valid, ethical uses, and they’re invaluable tools in the right circumstances – you wouldn’t slice steak with a spoon. Equally, firearms are acceptable for sports like clay pigeon shooting, but there are regulations in place to ensure their safe use. It’s the same with pen-testing devices – you wouldn’t release a data product to market without testing its security protocols, and the easier that testing is, the better. 

But the sale of knives is regulated; we check who we’re selling them to, and some people aren’t allowed to have them at all. Don’t you think, given the potential for criminal damage via these simple USB devices, similar rules should apply?

Updates and patching are close to my heart. Having spent over ten years in policing where you are often too late to the party, I’m a huge advocate of the philosophy that prevention is better than cure.  This mindset has followed me into technology so upgrades and patching are always high on my agenda.

A patching culture means starting from a position where application security patches and updates are part of the ethos. But all too often in application level management, people ask me why my team is doing a certain patch because:

  • It isn’t a security issue 
  • It isn’t a security concern that affects us 
  • It’s not a feature we use 

So why do we need patches and updates so often? The answer is operability. Upgrades underpin operability, and when we ignore patches or get them wrong, you can build lots of new features but the underlying security isn’t there. 

If your team has a patching culture then patching becomes an everyday part of the routine. Once you have that, everything falls into place. You can create AMIs and deploy quicker and safer and easier. 

Building a team culture where patches and upgrades are completed regularly and efficiently comes down to three things: drills, pain, and fast action. 

Patching Drills

If you speak to any emergency worker or current/ex military member, they will no doubt have a horror story or two about spending thousands of hours performing drills until they become muscle memory.  The value of doing the important things on a regular basis is well understood.  Drills ensure that team actions are simple, easy and effective, under any conditions.  It’s exactly the same for software!  Making upgrades a routine part of the daily work for all team members means that new engineers are embedded into this culture, and it just makes sense. 

If patching hurts, do it more often

This has to be my favourite of all the continuous delivery principles, and so relevant here.  The reason so many organisations shy away from upgrades and patching is because they can be a nightmare. But if we can make ourselves go through this pain regularly, the whole team will see the value of spending time to improve the process. It will eventually get to the point of not being a huge deal. But you can only get to that point by going through the pain and discomfort, first. 

Patches are better when they’re fast 

We know that small changes are better than big ones.  If your team has a strong patch culture and you’re regularly updating to the latest versions, then there are fewer changes to factor in and understand, when a critical incident or zero day comes in. 

If you are adopting a ‘drill’ mentality then you will have a familiar paved road without the added pressure of worrying about security issues because ‘this is what we do’. Additionally, having gone through the pain of getting to a patch culture means our pain has already been reduced, so we aren’t trying to respond to an incident quickly while surrounded by pain everywhere we turn.  

What does a patch culture mean? 

In summary, where you aren’t currently using AMI’s for your versioning and building on top at provisioning time, there are still things you can do, to be less like pets that are treated for illness and more like cattle that are replaced. 

A patch culture ensures you are working in the safest working environment for everyone. Start with patch often, patch often, patch often. 

Make it clear that upgrades and patches are important to you, your team, and your organisation. Love upgrades.  

Finally, as with all things agile, iterate and improve from where you are now.

Late in 2022, ChatGPT made its debut and dramatically changed the world. Within 5 days, it had reached 1 million users – an unprecedented adoption rate for any consumer application since the dawn of the Internet. Since then, many companies have started to think much bigger about what AI could mean for their customers and the wider world.

Here at Equal Experts we’ve seen a growing number of our customers evaluate how they can take advantage of this powerful technology. Business leaders are also rightly concerned about how to harness it safely and securely. In this article, we’ll explore some of the risks associated with adopting an LLM and discuss how these can be safely managed.

What is an LLM?

So what exactly is a large language model? Let’s see what ChatGPT has to say about that:

Follow that up with a question about what some common uses are for LLMs, and you’ll get a long list of suggestions ranging from content generation and customer support to professional document generation for medical, scientific and legal fields, as well as creative writing and art. The possibilities are immense!

Note: The above section is the only part of this article written with the assistance of an LLM!

When it comes to securing these kinds of systems, the good news is that we’re not starting from scratch. Decades of security research are still relevant, although there are some emerging security practices that you need to be aware of.

Tried and tested security practices are still important

Securing LLM architectures does not mean throwing out everything we know about security. The same principles we’ve been using to design secure systems are still relevant and equally important when building an LLM-based system.

As with many things, you have to get the basics right first – and we encourage you to start strong with security foundations such as least privilege access, secure network architecture, strong IAM controls, data protection, security monitoring, and secure software delivery practices such as those described in our Secure Delivery Playbook. Without these foundations in place, you risk building LLM castles on sand, and the cost and complexity in retrofitting foundational security controls is very high.

When it comes to LLM architectures, building in tried and tested security controls is critical, but not sufficient. LLMs bring with them some unique security challenges that need to be addressed. Security is contextual; there is no one-size-fits-all solution. So what do those LLM-specific security concerns look like?

New security practices are emerging

This is a new area of research that’s still undergoing a lot of change, but there are consistent themes across the industry around some of the major areas to focus on. You should see these as an LLM-specific security layer on top of the foundational security controls we described earlier. In many cases, these are not entirely new practices; instead, they are facets of security that we’ve been thinking about for a long time but are now being observed through a different lens.

Governance

Security governance models need to be updated to incorporate AI-specific concerns. This defines clear roles & responsibilities and sets expectations on everyone in the organisation, and provides a mechanism to help you to maintain suitable security standards over time. Simon Case, Head of Data at Equal Experts, has written a great article describing data governance.

Data governance and security governance need to work hand-in-hand. A strong AI security governance model should provide clear guidance to engineering teams in the kinds of security risks they need to protect against and security principles they need to follow, allowing them to adopt the best controls to meet those needs.

Some areas to consider when defining your governance approach include:

  • Model usage: What is the organisation’s policy on using SaaS models (e.g. OpenAI) vs self-hosted models? What are the data protection and regulatory implications of these decisions?
  • Data usage: Does the organisation permit AI-based solutions on PII or commercially-sensitive data? Do you allow use of customer data in LLM development?
  • Privacy and ethics: What use cases are acceptable for LLMs? Is automated decision making allowed, or is human oversight required? Can an LLM be exposed to customers or is it exclusively for internal use?
  • Agency and explainability: How do you ensure users can trust the decisions / outputs from AI systems? Are AI systems permitted to make decisions autonomously or is human intervention required?
  • Legal: What laws and regulations apply, and what internal engagement is needed with company legal teams when designing and building an LLM-based solution?
Delivery

The secure development practices that have become commonplace over the past few years remain applicable, but now need to be reexamined in light of new AI-specific threats. For example:

  • Secure by design: Do your security teams have the right knowledge of AI-based systems to guide engineering teams towards a secure design?
  • Training data security: Where is your training data sourced from? How can you validate the accuracy and provenance of that data? What controls do you have in place to protect it from leaks or poisoning attacks?
  • Model security: How are you protecting against malicious inputs, such as prompt injection? Where is your trained model stored, and who has access to it?
  • Supply chain security: What checks are in place to validate the components used in the system? How are you ensuring isolation and access control between components and data?
  • Testing: Are you conducting adversarial testing against your model? Have your penetration testers got the right skills and experience to assess LLM-based systems?
Operations

Monitoring an LLM-based system requires that you understand the new threats that come into play, including threats to the underlying data as well as the abuse of the LLM itself (e.g. leading to undesirable actions or output). Some areas to consider are:

  • Response accuracy: How do you ensure the model continues to produce accurate results? Can you detect model abuse that taints outputs? How do you detect and correct model drift?
  • Abuse detection: Can you identify abusive inputs to the system? Can you identify when model outputs are being used for harm? Do you have incident response plans to protect against these situations?
  • Pipeline security: Have you threat-modelled your delivery pipelines? Are they monitored as production systems?
  • AI awareness: Do your security operations teams understand AI systems sufficiently to monitor them? Have you updated your security processes to factor in changes with AI?

Do remember this is a very new field and the state of the art is changing all the time. It’s important to understand that the industry’s knowledge of the threats and countermeasures is evolving, so will need constant attention throughout the lifecycle of your LLM-based product. There are many useful guides and frameworks to draw on when defining your own organisation-specific approach to LLM adoption, such as Google’s Secure AI Framework. I would encourage you to invest time researching these when defining your own AI adoption plans.

How can I use LLMs safely within my business?

Given how new this technology is, we would recommend your first LLM-based project is focused on an internal use case. This allows you to get familiar with the technology in a safe environment before looking to adopt it for more ambitious goals. For example, we’ve seen teams evaluate LLMs to improve internal platform documentation, making it easier for product teams to onboard to the platform while reducing the support burden on the platform team. This particular example provides an excellent proving ground for LLMs because:

  • Users are all employees with a common interest in improving the system
  • There is no direct customer impact
  • Undesired LLM output can be monitored and feedback loops designed to improve and correct model behaviour
  • The model is trained on a well-defined set of high quality documentation, free from commercially sensitive data or PII
  • The system has no ability to take autonomous actions on other systems

This is a fast-moving area with constant improvements, but there are sound principles for ensuring secure adoption. 

Conclusion

Start with a strong foundation built on secure software engineering principles & practices, ensure you know exactly how you’re using LLMs and what data could be processed, and apply reasonable security controls for the new and emerging threats.

How can Equal Experts help?

LLMs are not a technology to fear or avoid. They’re a powerful new capability that should be pursued cautiously, with a well thought out plan that takes all the relevant security issues into account. The approach we encourage at Equal Experts is to conduct a discovery to identify the most valuable problems to be solved. Once you’ve tested your ideas and agreed on a particular problem to solve with an LLM, consider running an inception to align everyone on the team and de-risk delivery. Of course, security should be at the forefront of the conversation throughout this process. We have extensive experience in all of these areas and would love to help you get started in this new field. If you’re exploring ideas around LLMs, do get in touch.

Chaos Days are an opportunity to introduce disruption to your IT systems, so that you can understand how they will respond to possible ‘real’ disruptions. Of course, it’s also a highly effective way for teams to practice and improve how they respond to IT failures.

             Click to read the full playbook

In this article, we’ll discuss some of the most common questions about Chaos Days and how they can be used to improve IT service resilience. If you’d like to find out more about how to plan, organise and run your own Chaos Day, don’t miss our Chaos Days Playbook, which you can download for free.  

Q: What is a Chaos Day? 

A Chaos Day is an event that runs over one or more days where teams can explore how their service responds to failures safely. During a Chaos Day, teams design and run controlled experiments in pre-production or production environments. Each experiment injects a failure into the service, such as terminating a compute instance, or filling up a storage device) and the team observes and analyses the impact and overall system response, and the response of the supporting team. Chaos Days are a practice within the field of chaos engineering.

Q: What is chaos engineering? 

Chaos engineering is defined as the discipline of experimenting on a system to build confidence in the system’s capability to withstand turbulent conditions in production. Chaos engineering teams design and run experiments that inject failures into a target system, so the team can learn how the system responds. This learning improves the resilience of the system by:

  1. Equipping the team with deeper understanding about system behaviour
  2. Informing the team on where to invest in order to improve system resilience

Q: Why do we organise Chaos Days? 

Chaos Days provide a focal event for your team to practice chaos engineering. They are especially useful to teams that might be less familiar with this discipline, because they introduce chaos engineering in a structured, boundaried manner. 

Chaos Days improve system resilience by helping your people learn about systems, and gain experience in how to diagnose and solve problems in high-stress situations. They provide an opportunity to improve processes such as incident management, incident analysis and engineering approaches, such as how faults should be handled and how resilience testing is performed during feature development. 

Finally, Chaos Days help organisations to initiate changes that make services more resilient, improve observability and make services and dependencies better understood. 

Q: How do you implement chaos engineering? 

Chaos engineering is a broad and deep discipline, to which our Chaos Day playbook provides a great introduction, including a 5-minute guide to running a Chaos Day. Once you’ve digested that, the simplest next steps are to:

  1. Decide which part of your system you want to learn more about.
  2. Come up with a hypothesis for how that part responds to specific failures.
  3. Design and run an experiment to test that hypothesis, by injecting a failure into that part of the system. The failure injection can be manual (e.g. stop a service the system depends on) or automated (e.g. use infrastructure-as-code to remove access to the service for the duration of the experiment). 
  4. Observe how the system responds to the failure and review as a team what was learnt from this experiment and any changes you should make as a result of it.
  5. Rinse and repeat.

Q: What are some top tips for Chaos Days? 

  1. Start small, with one or two teams and a few experiments, not tens of teams and tens of experiments. This allows you to adapt and learn how to run a Chaos Day in your specific context, before scaling out to multiple teams and many experiments.
  2. Plan ahead – it’s possible to run a mini chaos event in a single day, but you’ll get the most from any chaos event by scheduling time in advance to design and run experiments, then reflect and share the lessons extracted from them.
  3. Spread knowledge by involving the whole team, but limiting how much diagnosis and repair your most experienced engineers do – either treat them as absent for that day or pair them with less experienced team members.
  4. Be conscious of business critical events that the chaos might impact (especially if it gets out of control). Also, allow time to return the system to its normal state. You don’t want to take down a key environment just when it’s needed for a critical release. 

Q: What tools are available for running a Chaos Day? How should we run a Chaos Day if we’re running AWS? 

The experiments you run during a Chaos Day typically modify system configuration or behaviour in some way that simulates a failure (e.g. shutting down a compute instance, closing a network connection). These modifications can either be done manually (e.g. through temporarily editing configuration) or in a more automated manner via tooling such as infrastructure-as-code (IaC), Chaos Monkey, AWS Fault Injection Simulator or Gremlin. If you want to repeat experiments or track them via source-control, then the tooling approach is preferable, as it codifies the experiment and automates its injection and rollback. 

Q: How to set up a chaos engineering day? 

Download our Chaos Day playbook in pdf if you prefer

That’s simple – just follow our playbook!

 

In our recent Operationalising ML Playbook we discussed the most common pitfalls during MLOps. One of the most common pitfalls? Failing to implement appropriate secure development at each stage of MLOps. 

Our Secure Development playbook describes the practices we know are important for secure development and operations and these should be applied to your ML development and operations.

In this blog we will explore some of the security risks and issues that are specific to MLOps. Make sure you check them all before publishing your model into production. 

In machine learning, systems use example data to try to learn something – which may be output as a prediction or insight. The examples used to train ML models are known as training datasets, and security issues can be broadly divided into those affecting the model before and during training, and those affecting models that have already been trained. 

Vulnerability to data poisoning or manipulation  

One of the most commonly discussed security issues in MLOps is data poisoning – this is an attack where hackers attempt to corrupt or manipulate the data used for training ML models. This might be by switching expected responses, or adding new responses into a system. The result of data poisoning is that data confidentiality and reliability are both damaged.  

When data for ML models is collected from online sources from sensors or online sources, the risk of data poisoning can be extremely high. Attacks can include label flipping (data is poisoned by changing labels in data) and gradient descent attacks (where the ability of a model to understand how close it is to predicting the correct answer is damaged by either making the model falsely believe it’s found the answer, or by preventing it from finding the answer by constantly changing that answer). 

Exposure of data in the pipeline

You will certainly need to include data pipelines as part of your solution. In some cases they may use personal data in the training. Of course these should be protected to the same standards as you would in any other development. Ensuring the privacy and confidentiality of data in machine learning models is critical to protect against data extraction attacks and function extraction attacks. 

Making the model accessible to the whole internet

Making your model endpoint publicly accessible may expose unintended inferences or prediction metadata that you would rather keep private. Even if your predictions are safe for public exposure, making your endpoint anonymously accessible may present cost management issues. A machine learning model endpoint can be secured using the same mechanisms as any other online service.

Embedding API Keys in mobile apps 

A  mobile application may need specific credentials to  directly access your model endpoint. Embedding these credentials in your app allows them to be extracted by third parties and used for other purposes. Securing your model endpoint behind your app backend can prevent uncontrolled access.

As with most things in development, it only takes one person to neglect MLOps security to compromise the entire project. We advise organisations to create a clear and consistent set of governance rules that protect data confidentiality and reliability at every stage of an ML pipeline. 

Everyone in the team needs to agree on the right way to do things – it only takes one leak or data attack for the overall performance of a model to be compromised.