The Importance of Secure Development in MLOps

In our recent Operationalising Machine Learning playbook we discussed the most common pitfalls in MLOps. One of the most frequent? Failing to apply appropriate secure development practices at each stage of MLOps.

Our Secure Development playbook describes the practices we know are important for secure development and operations, and these should be applied equally to your ML development and operations.

In this blog we will explore some of the security risks and issues that are specific to MLOps. Make sure you have checked them all before deploying your model to production.

In machine learning, systems use example data to learn something, which may be output as a prediction or insight. The examples used to train ML models are known as training datasets, and security issues can be broadly divided into those affecting a model before and during training, and those affecting models that have already been trained.

Vulnerability to data poisoning or manipulation  

One of the most commonly discussed security issues in MLOps is data poisoning: an attack in which adversaries attempt to corrupt or manipulate the data used to train ML models, for example by switching expected responses or adding new responses into a system. The result of data poisoning is that the integrity and reliability of the data, and of any model trained on it, are damaged.

When data for ML models is collected from sensors or online sources, the risk of data poisoning can be extremely high. Attacks include label flipping (where data is poisoned by changing its labels) and gradient descent attacks (where a model's ability to gauge how close it is to the correct answer is damaged, either by making the model falsely believe it has found the answer, or by preventing it from converging by constantly changing that answer).
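
To make label flipping concrete, here is a minimal sketch using scikit-learn and synthetic data (all names and numbers are illustrative, not a real attack trace) showing how corrupting a modest fraction of training labels can degrade a model:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Build a synthetic dataset, then simulate an attacker flipping 10% of the
# training labels - purely illustrative.
X, y = make_classification(n_samples=2000, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

rng = np.random.default_rng(0)
poisoned = y_train.copy()
flipped = rng.choice(len(poisoned), size=len(poisoned) // 10, replace=False)
poisoned[flipped] = 1 - poisoned[flipped]  # invert the labels the attacker controls

clean = LogisticRegression(max_iter=1000).fit(X_train, y_train)
dirty = LogisticRegression(max_iter=1000).fit(X_train, poisoned)
print(f"clean accuracy:    {clean.score(X_test, y_test):.3f}")
print(f"poisoned accuracy: {dirty.score(X_test, y_test):.3f}")
```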

Exposure of data in the pipeline

Data pipelines will almost certainly form part of your solution, and in some cases they may use personal data for training. These pipelines should be protected to the same standards as any other development. Ensuring the privacy and confidentiality of data in machine learning models is critical to protect against data extraction attacks and function extraction attacks.
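
How you do this depends on your pipeline, but as one hedged illustration, personal identifiers can be pseudonymised with a keyed hash before they enter the training environment (the key handling and record fields below are hypothetical):

```python
import hashlib
import hmac

# Hypothetical illustration: in real use, load the key from a secrets
# manager, never from source code.
SECRET_KEY = b"load-me-from-a-secrets-manager"

def pseudonymise(value: str) -> str:
    """Replace a personal identifier with a keyed hash before training."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()

record = {"email": "user@example.com", "age": 34}  # hypothetical record
record["email"] = pseudonymise(record["email"])
print(record)
```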

Making the model accessible to the whole internet

Making your model endpoint publicly accessible may expose unintended inferences or prediction metadata that you would rather keep private. Even if your predictions are safe for public exposure, making your endpoint anonymously accessible may present cost management issues. A machine learning model endpoint can be secured using the same mechanisms as any other online service.
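
For example, here is a minimal sketch of protecting a prediction endpoint with an API key, using FastAPI (the key store and route names are illustrative; in practice you might use OAuth, mutual TLS or your cloud provider's IAM instead):

```python
from fastapi import Depends, FastAPI, HTTPException
from fastapi.security import APIKeyHeader

app = FastAPI()
api_key_header = APIKeyHeader(name="X-API-Key")

# Hypothetical key store - in practice, validate against a secrets manager
# or your identity provider rather than an in-memory set.
VALID_KEYS = {"replace-with-a-managed-key"}

def require_api_key(key: str = Depends(api_key_header)) -> str:
    if key not in VALID_KEYS:
        raise HTTPException(status_code=403, detail="Invalid API key")
    return key

@app.post("/predict")
def predict(features: dict, _key: str = Depends(require_api_key)) -> dict:
    # model.predict(...) would go here; a fixed value keeps the sketch runnable.
    return {"prediction": 0.42}
```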

Embedding API keys in mobile apps

A mobile application may need specific credentials to access your model endpoint directly. Embedding these credentials in your app allows them to be extracted by third parties and used for other purposes. Securing your model endpoint behind your app's backend can prevent uncontrolled access.
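
A hedged sketch of that pattern: the mobile app calls your backend, which holds the model credential server-side and forwards the request (the endpoint URL, header name and environment variable are hypothetical):

```python
import os

import requests
from fastapi import FastAPI, Request

app = FastAPI()

# Hypothetical values - the model credential lives on the server,
# never inside the mobile app binary.
MODEL_ENDPOINT = "https://models.internal.example.com/predict"
MODEL_API_KEY = os.environ["MODEL_API_KEY"]

@app.post("/api/predict")
async def proxy_predict(request: Request) -> dict:
    # The app authenticates to this backend (session token, OAuth, etc.);
    # the backend then calls the model endpoint with its own credential.
    payload = await request.json()
    resp = requests.post(
        MODEL_ENDPOINT,
        json=payload,
        headers={"X-API-Key": MODEL_API_KEY},
        timeout=5,
    )
    return resp.json()
```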

As with most things in development, it only takes one person to neglect MLOps security to compromise the entire project. We advise organisations to create a clear and consistent set of governance rules that protect data confidentiality and reliability at every stage of an ML pipeline. 

Everyone in the team needs to agree on the right way to do things – it only takes one leak or data attack for the overall performance of a model to be compromised. 


Despite huge adoption of AI and machine learning (ML), many organisations are still struggling to get ML models into production at scale.

The result is AI projects that stall, don’t deliver ROI for years, and potentially fail altogether. Gartner Group estimates that only half of ML models ever make it out of trials into production. 

Why is this happening? One of the biggest issues is that companies develop successful ML prototype models, but these models aren’t equipped to be deployed at scale into a complex enterprise IT infrastructure. 

All of this slows down AI development. Software company Algorithmia recently reported that most companies spend between one and three months deploying a new ML model, while one in five companies took more than three months. Additionally, 38% of data scientists’ time is typically spent on deployment rather than developing new models. 

Algorithmia found that these delays were often due to unforeseen operational issues. Organisations are deploying models only to find they lack vital functionality, don’t meet governance or security requirements, or need modification to provide appropriate tracking and reporting. 

How MLOps can help 

Enter MLOps. While MLOps leverages DevOps' focus on compliance, security, and management of IT resources, MLOps adds much more emphasis on the consistent development, deployment, and scalability of models.

Organisations can accelerate AI adoption and solve some of their AI challenges by adopting MLOps. Algorithmia found that where organisations were using MLOps, data scientists were able to reduce the time spent on model deployment by 22%, and the average time taken to put a trained model into production fell by 31%. 

That’s because MLOps provides a standard template for ML model development and deployment, along with a clear history and version control. This means processes don’t need to be reinvented for each new model, and standardised processes can be created to specify how all models should meet key functional requirements, along with privacy, security and governance policies. 

With MLOps, data teams can be confident that new code and models will meet architecture and API requirements for production usage and testing. By removing the need to create essential features or code from scratch, new models are faster to build, test, train and deploy. 

MLOps is being widely used for tasks such as automation of ML pipelines, monitoring, lifecycle management and governance. MLOps can be used to monitor models to detect any fall in performance or data drift that suggests a model might need to be updated or retrained.
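
As a simple illustration of drift monitoring, a two-sample Kolmogorov-Smirnov test can compare a feature's live distribution against the one seen at training time (the data below is synthetic and the alerting threshold is an assumption you would tune):

```python
import numpy as np
from scipy.stats import ks_2samp

# Synthetic stand-ins: the distribution a feature had at training time,
# and the (shifted) distribution now arriving in production.
training_feature = np.random.default_rng(0).normal(0.0, 1.0, 10_000)
live_feature = np.random.default_rng(1).normal(0.4, 1.0, 1_000)

statistic, p_value = ks_2samp(training_feature, live_feature)
if p_value < 0.01:  # threshold is an assumption, tune it for your use case
    print(f"Possible drift (KS={statistic:.3f}, p={p_value:.1e}) - "
          "consider retraining")
```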

Having a consistent view of ML models throughout the lifecycle in turn allows teams to easily see which models are live, which are in development, and which require maintenance or updates. These can be scheduled more easily with a clear overview of the ML landscape.

Within MLOps, organisations can also build feature stores, where code and data can be re-used from prior work, further speeding up the development and deployment of new models. 

Learn more about MLOps 

Our new playbook, Operationalising Machine Learning, provides guidance on how to create a consistent approach to monitoring and auditing ML models. Creating a single approach to these tasks allows organisations to create dashboards that provide a single view of all models in development and production, with automated alerts in case of issues such as data drift or unexpected performance issues.

If you're struggling to realise the full potential of machine learning in your organisation, the good news is that you're not alone. According to VentureBeat, 87% of AI projects never make it into production.

MLOps emerged to address this widespread challenge. By blending AI and DevOps practices, MLOps promised smooth, scalable development of ML applications.

The bad news is that MLOps isn't an immediate fix for all AI projects. Operationalising any AI or machine learning solution will present its own challenges, which must be addressed to realise the potential these technologies offer. Below we've outlined five of the biggest MLOps challenges in 2022, along with some guidance on solving these issues in your organisation.

You can read about these ideas in more detail in our new MLOps playbook, “Operationalising Machine Learning”, which provides comprehensive guidance for operations and AI teams in adopting best practice around MLOps. 

Challenge 1: Lack of user engagement

Failing to help end users understand how a machine learning model works or what algorithm is providing an insight is a common pitfall. After all, this is a complex subject, requiring time and expertise to understand. If users don’t understand a model, they are less likely to trust it, and to engage with the insights it provides.

Organisations can avoid this problem by engaging with users early in the process, by asking what problem they need the model to solve. Demonstrate and explain model results to users regularly and allow users to provide feedback during iteration of the model. Later in the process, it may be helpful to allow end users to view monitoring/performance data so that you can build trust in new models. If end users trust ML models, they are likely to engage with them, and to feel a sense of ownership and involvement in that process.

Challenge 2: Relying on notebooks

Like many people, we have a love/hate relationship with notebooks such as Jupyter. Notebooks can be invaluable when you are creating visualisations and pivoting between modelling approaches.

However, notebooks contain both code and outputs, along with important business and personal data, meaning it's easy to inadvertently pass data to somewhere it shouldn't be. Notebooks don't lend themselves easily to testing, and because cells can run out of order, the same notebook can produce different results depending on the order in which its cells are run.

In most cases, we recommend moving to standard modular code after creating an initial prototype, rather than using notebooks. This results in a model that is more testable and easier to move into production, with the added benefit of speeding up algorithm development.
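
For instance, a notebook cell that derives a feature can become a plain function in a module, which is then trivially unit-testable (the module, feature and column names below are hypothetical):

```python
# features.py - notebook cell logic moved into a plain module
import pandas as pd

def add_price_per_sqm(df: pd.DataFrame) -> pd.DataFrame:
    """Derive price per square metre; returns a copy so callers aren't mutated."""
    out = df.copy()
    out["price_per_sqm"] = out["price"] / out["area_sqm"]
    return out

# test_features.py - the same logic is now trivially unit-testable
def test_add_price_per_sqm():
    df = pd.DataFrame({"price": [400_000.0], "area_sqm": [80.0]})
    assert add_price_per_sqm(df)["price_per_sqm"].iloc[0] == 5_000.0
```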

Challenge 3: Poor security practice 

There are a number of common security pitfalls in MLOps that should be avoided, and it’s important that organisations have appropriate practices in place to ensure secure development protocols.

For example, it's surprisingly common for model endpoints and data pipelines to be publicly accessible, potentially exposing sensitive metadata to third parties. Endpoints must be secured to the same standard as any other development, to avoid the cost management and security problems caused by uncontrolled access.

Challenge 4: Using machine learning inappropriately

Despite the hype, ML shouldn’t always be the default way to solve a problem. AI and ML are essentially tools that help to understand complex problems like natural language processing and machine vision.

Applying AI to real-world problems that aren't like this is unnecessary, and leads to extra complexity, unpredictability and increased costs. You could build an AI model to predict whether a number is even or odd – but you shouldn't.

When addressing a new problem, we advise businesses to try a non-ML solution first. In many cases, a simple, rule-based system will be sufficient.
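
The even/odd example above makes the point: a one-line rule does the job. Many real business rules are almost as simple, as in this sketch (the review rule is a hypothetical illustration):

```python
# The even/odd example above needs one line, not a trained model.
def is_even(n: int) -> bool:
    return n % 2 == 0

# A hypothetical business rule that might otherwise be over-engineered as ML:
# flag an order for review if it is unusually large for the customer.
def needs_review(order_value: float, customer_average: float) -> bool:
    return order_value > 3 * customer_average

assert is_even(42) and not is_even(7)
assert needs_review(order_value=900.0, customer_average=100.0)
```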

Challenge 5: Forgetting the downstream application of a new model   

Achieving ROI from machine learning requires the ML model to be integrated into business systems, with due attention to usability, security and performance.

This process becomes even longer if models are not technically compatible with business systems, or do not deliver the expected level of accuracy. These issues must be considered at the start of the ML process, to avoid delays and disappointment.

A common ML model might be used to predict ‘propensity to buy’ – identifying internet users who are likely to buy a product. If this downstream application isn’t considered when the model is built, there is no guarantee that the data output will be in a form that can be used by the business API. A great way to avoid this is by creating a walking skeleton or steel thread (see our Playbook for advice on how to do this).
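
One way to start such a steel thread is to agree the output contract with the downstream team first, then wire a stub model through it end to end. A minimal sketch (the field names and version string are illustrative, not from the playbook):

```python
from dataclasses import asdict, dataclass

# Hypothetical contract agreed with the downstream team before modelling starts.
@dataclass
class PropensityScore:
    user_id: str
    propensity_to_buy: float  # agreed range: 0.0 to 1.0
    model_version: str

def stub_model(user_id: str) -> PropensityScore:
    """Placeholder model that lets the end-to-end skeleton run on day one."""
    return PropensityScore(user_id=user_id, propensity_to_buy=0.5,
                           model_version="0.0.1-skeleton")

print(asdict(stub_model("u-123")))  # the payload the business API consumes
```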

Find out more about these challenges and more in our new Operationalising Machine Learning Playbook, which is available to read here.

Building a predictive model to forecast the future from historical data is standard practice for today’s businesses. But deploying, scaling and managing these models is far from simple.

Each ML solution depends on an algorithm (code) and a set of data used to develop and train the algorithm. For this reason, building ML solutions is different to other types of software development. 

Enter MLOps, or machine learning operations, a set of processes that help organisations to develop, deploy and monitor ML models at scale by applying best practices to infrastructure, code and data. 

MLOps is a relatively new idea but one that has been adopted by many organisations – the market for MLOps solutions is expected to reach $4 billion by 2025. At Equal Experts, we have been involved in developing and deploying AI and ML for a number of applications including to: 

  • Assess cyber risk 
  • Evaluate financial risk 
  • Improve search recommendations for retail websites 
  • Improve logistics and supply chains 

Key terms used in MLOps

If you’re new to MLOps there are several important terms to be aware of:

  • Machine learning (ML) – a subset of AI that involves training algorithms with data rather than developing hand-crafted algorithms. A machine learning solution uses a data set to train an algorithm, typically training a classifier, which says what type of thing a piece of data is (e.g. this picture is of a dog); a regressor, which estimates a value (e.g. the price of this house is £400,000); or an unsupervised model, such as a generative model that can be used to write novel text (such as song lyrics). The classifier and regressor cases are shown in the sketch after this list.
  • Model – in machine learning, a model is the result of training an algorithm with data, which maps a defined set of inputs to outputs.
  • Algorithm – we use this term more or less interchangeably with model. (There are some subtle differences, but they're not important, and using the term 'algorithm' avoids confusion with the standard software engineering term 'data model' – a definition of the data entities, fields, relationships and so on for a given domain, used among other things to define database structures.)
  • Ground-truth data – a machine learning solution usually needs a data set that contains the input data (e.g. pictures) along with the associated answers (e.g. this picture is of a dog, this one is of a cat) – this is the 'ground truth'.
  • Labelled data – means the same as ground-truth data.
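
To ground the classifier/regressor distinction, here is a minimal scikit-learn sketch using two bundled datasets (purely illustrative):

```python
from sklearn.datasets import load_diabetes, load_iris
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classifier: maps inputs to a category (which species is this flower?).
X_c, y_c = load_iris(return_X_y=True)
classifier = LogisticRegression(max_iter=1000).fit(X_c, y_c)

# Regressor: maps inputs to a numeric value (a disease-progression score).
X_r, y_r = load_diabetes(return_X_y=True)
regressor = LinearRegression().fit(X_r, y_r)

print(classifier.predict(X_c[:1]), regressor.predict(X_r[:1]))
```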

How does MLOps work? 

We talk about MLOps as a set of processes that help data scientists to develop consistent, scalable ML models, and monitor their performance. To create and use these algorithms, you will usually follow these steps: 

Initial development of the algorithm – developing a model is the first step in machine learning. Data scientists will identify or create 'ground truth' data sets and explore them. They will build and evaluate prototypes of the models, trying out different core algorithms and data transformations until they arrive at one which meets the business need.

Integrate/deploy the model – once the model has been built, it must be integrated into the business. This can be done in various ways depending on the consuming service. In modern architectures, models are commonly implemented as standalone microservices, deployed by copying an approved version of the model into an operational environment.
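
As one hedged sketch of that deployment pattern, a microservice can load an approved, serialised model artefact at startup and expose a prediction route (the paths and formats are assumptions, and pickle should only be used for artefacts your own trusted pipeline produced):

```python
import pickle
from pathlib import Path

from fastapi import FastAPI

# Hypothetical layout: the release process copies an approved artefact to
# /opt/models/approved/model.pkl inside the service image.
MODEL_PATH = Path("/opt/models/approved/model.pkl")

app = FastAPI()
model = pickle.loads(MODEL_PATH.read_bytes())  # loaded once at startup

@app.post("/predict")
def predict(features: list[float]) -> dict:
    # Only unpickle artefacts produced by your own, trusted pipeline.
    return {"prediction": float(model.predict([features])[0])}
```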

Monitor performance – All ML models need to be monitored to ensure they’re running and meeting demand, but also that the results of the model are accurate and reliable.

Update model – over time, models must be retrained to reflect new data, or improvements to the model. In this case, it’s important to maintain version control and to direct downstream services to the new model.  
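
Version control for models can be as simple as immutable artefact directories plus a pointer file that downstream services read; a hedged sketch of one such convention (all paths and file names are hypothetical):

```python
import json
from pathlib import Path

# Hypothetical convention: model artefacts are immutable, and a single
# pointer file records which version downstream services should load.
REGISTRY = Path("/opt/models/registry")

def promote(version: str) -> None:
    """Point downstream consumers at a newly approved model version."""
    (REGISTRY / "current.json").write_text(json.dumps({"version": version}))

def current_model_path() -> Path:
    version = json.loads((REGISTRY / "current.json").read_text())["version"]
    return REGISTRY / version / "model.pkl"
```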

Operationalising Machine Learning 

Our MLOps playbook brings together our experiences working with algorithm developers to build ML solutions. It provides a comprehensive overview of what you need to consider when providing the architecture, tools and infrastructure to support data scientists and to integrate their outputs into the business.

Download the playbook for expert guidance on how your organisation can attain the promised business value from algorithms by providing engineering to support algorithm development, and by integrating ML more effectively into your business processes. You'll find helpful advice on how to:

  • Collect data that drives machine learning, and make that available to data scientists 
  • Integrate algorithms into your everyday business 
  • Manage configuration control, deployment and monitoring of algorithms
  • Test and monitor the algorithms  

View our online version or download a PDF here.