Stuart Brown, Technical Architect

Data, Gen AI, Our Thinking, Tech Focus | Mon 12th February, 2024

Helping ease the path to AI adoption in the enterprise

What do you do when decision-makers think that AI gets more wrong than it does right?

I’m currently working with a customer in the education sector, replacing an internally hosted system with a cloud-based platform.

Creating a new platform is a big job: one in-house engineer, two Equal Experts engineers and two third-party engineers are all working on it. As you’d expect, there’s a fair amount of code.

The customer uses Visual Studio Code, a free IDE, with React as the front-end framework. New pages are built with Gatsby, which pulls in content from a headless CMS, Kontent.ai, and we write the page components themselves in VS Code.
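For illustration, here’s a minimal sketch of the kind of Gatsby page component this setup produces. The component, the ArticleContent shape and the pageContext wiring are my own assumptions for the example rather than the customer’s actual code; in practice the fields would be populated from Kontent.ai at build time.

```tsx
// Minimal illustrative Gatsby page component (hypothetical, not the real project code).
// Content is assumed to arrive via pageContext, populated from Kontent.ai in gatsby-node.
import * as React from "react";
import type { PageProps } from "gatsby";

interface ArticleContent {
  title: string;
  body: string; // HTML rendered from the CMS rich-text field
}

const ArticlePage: React.FC<PageProps<object, ArticleContent>> = ({ pageContext }) => (
  <main>
    <h1>{pageContext.title}</h1>
    {/* Body markup sourced from the headless CMS at build time */}
    <div dangerouslySetInnerHTML={{ __html: pageContext.body }} />
  </main>
);

export default ArticlePage;
```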

We’ve been using VS Code and Copilot to speed up the creation of new code within the IDE. Copilot uses AI to make auto-complete suggestions as the developer is writing code. It draws on knowledge from billions of lines of code written by thousands of developers to make an informed prediction about what you’re writing. As you type, Copilot offers a suggestion for the next lines of code, and if it’s correct you just press Tab and it inserts that code.
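To make that concrete, here’s a purely illustrative example of the sort of completion Copilot offers (the function is hypothetical, not taken from the project). After typing the comment and the function signature, Copilot will typically suggest a body very close to the one below, and pressing Tab accepts it.

```ts
// Convert a CMS page title into a URL-friendly slug, e.g. "Our Thinking" -> "our-thinking"
function slugify(title: string): string {
  // The body below is the kind of suggestion Copilot produces from the comment and signature
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-") // collapse runs of non-alphanumeric characters into hyphens
    .replace(/^-+|-+$/g, "");    // strip any leading or trailing hyphens
}
```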

The huge advantage is that it makes the creation of large quantities of code much faster. In general, I can create in a couple of hours what used to take a full day. And what Copilot gives you is working code, free of the typos you’d commonly see in human-written code.

However, we met with some resistance from the customer to the idea of using Copilot, particularly around accuracy. The client’s team had heard more about what AI gets wrong than what it gets correct, and we faced a lot of skepticism.

Our approach with this engagement was to do the following.

Sharing research on Copilot’s impact on efficiency

We shared GitHub’s own research with the customer, which looks at how Copilot affects the efficiency and quality of code development.

In GitHub’s study, one group of developers wrote JavaScript code without AI assistance, while the other group used Copilot. The results were compared for accuracy and correctness, and showed that the group writing code manually took 161 minutes on average to complete the task, compared with 71 minutes when using Copilot. In other words, the task took roughly 127% longer without Copilot (161 ÷ 71 ≈ 2.3).
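As a quick sanity check on those figures (the minute counts are the ones GitHub reports; the percentages simply follow from them):

```ts
// Timings quoted from GitHub's study; the percentages are derived from them.
const withoutCopilot = 161; // minutes, manual group
const withCopilot = 71;     // minutes, Copilot group

// ≈ 0.56: using Copilot cut the time taken by just over half
const timeSaved = 1 - withCopilot / withoutCopilot;

// ≈ 1.27: the task took roughly 127% longer without Copilot
const extraTime = withoutCopilot / withCopilot - 1;

console.log(`${Math.round(timeSaved * 100)}% less time with Copilot`);
console.log(`${Math.round(extraTime * 100)}% longer without Copilot`);
```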

Sharing personal experiences of Copilot

Faced with skepticism about AI, it’s important to help clients understand that while tools like Copilot might sometimes suggest something that’s wrong, 95% of the time, it’s going to suggest exactly what you would have typed.

Also, you’re not required to go along with Copilot if it makes a suggestion that isn’t correct. You can ignore it, or provide more context to get a better suggestion.

GitHub says that Copilot makes developers 55% more effective, but I think that’s a conservative estimate. I’m an architect and I don’t write code day in, day out, but I’ve found it invaluable: I can get through a day’s work in probably a couple of hours. It’s a highly effective code-completion tool, and it’s context aware: it knows what you’ve written before, such as the structure of the project and the code base.

Starting early in the procurement process

One of the big challenges we had to overcome with Copilot was our customer’s procurement process. Although Copilot only costs $20 per month, and there are only six developers, it took 120 days to get through the formal procurement process before we could use the tool.

We found the procurement process almost as big a challenge as convincing the client to use the tool in the first place.

Being an evangelist

In many cases people are still learning about how AI can be helpful in the development process, so sharing examples of how it relates to the project is important.

I’ve found that Copilot is really good when you’re writing code that contains branches. That might sound like it goes against best practice, but it picks up on those branches: if you start writing the code for something on a Monday, it recognises that you’ll probably want the same for Tuesday, Wednesday and so on.
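As a made-up illustration (not the customer’s code), this is the sort of branch-heavy, repetitive function where that shines: after the first case or two are written by hand, Copilot tends to suggest the remaining days in exactly the same pattern.

```ts
// Hypothetical example of repetitive, branch-heavy code that Copilot completes well.
type Day =
  | "Monday" | "Tuesday" | "Wednesday" | "Thursday"
  | "Friday" | "Saturday" | "Sunday";

function openingHours(day: Day): string {
  switch (day) {
    case "Monday":
      return "09:00-17:00"; // written by hand...
    case "Tuesday":
      return "09:00-17:00"; // ...from here on, Copilot suggests each case
    case "Wednesday":
      return "09:00-17:00";
    case "Thursday":
      return "09:00-17:00";
    case "Friday":
      return "09:00-16:00";
    case "Saturday":
    case "Sunday":
      return "Closed";
  }
}
```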

Recently, a chat feature has been released that lets you ask questions about your code. That’s helpful when you’re working with code you’re not familiar with: you can ask Copilot to explain what the code does, and you can even ask it to read the code and tell you if you could potentially do it better.

Finally, I’ve found that Copilot really excels when you’re picking up a new language. Most of the time you’re working in a language you understand, but Copilot can explain itself, which is really helpful when you’re learning a new language.