Neil Murray, Data Engineer

AI, Data · Tue 20th February 2024

How Copilot and other LLMs transformed my coding process

We know that LLMs can help with software development, but which tools can actually save time, and how? 

As a software engineer, I was curious to try GitHub Copilot and similar, more code-specific tools to see whether they could help me produce quality code more quickly. I’ve been using several “general” chat-based LLMs such as ChatGPT and Anthropic’s Claude for eight months now, and I really love them.

In this post, I want to share some of the ways these tools have improved my working practices (and some ways they haven’t). 

Useful applications of LLM tools

Simple commands

Copilot is great at auto-completing those simple commands that I don’t use often. For example, I wanted to get the size of a file in Python. I knew it was ‘os.path…’, then something about ‘size’. Copilot successfully auto-completed it to ‘os.path.getsize’. It seems silly, but each of these single-line completions replaces a Google search that, however quick, is still a distraction and breaks the flow.
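For completeness, here’s a minimal sketch of that completion in use. The temporary file is just scaffolding I’ve added so the snippet runs on its own:

```python
import os
import tempfile

# Create a small temporary file as scaffolding, then ask for its size.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"hello")
    path = f.name

size = os.path.getsize(path)  # the completion Copilot suggested
print(size)  # prints 5 (bytes)
os.remove(path)
```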

Doing things in unfamiliar languages

I wanted a simple AppleScript that would let me right-click in Finder and have a new, empty file created automatically. I have no experience with AppleScript, and just the thought of spending hours searching for examples and explanations would have completely put me off. But with ChatGPT, all I had to do was say “I want an AppleScript that when I right-click I get a new file”, and then follow up with “How do I deploy this?” In just 20 minutes I had a new app icon in Finder that did exactly what I wanted. I literally couldn’t have done this before.

Resolving difficult error messages

We’re all familiar with those development problems where the code doesn’t work and you just can’t see why. It’s the smallest error, and you’ve been staring at it for too long. You end up painstakingly going through the code, line by line, token by token, only to find it was something very simple, like the wrong kind of quote or the order of import statements. If I even suspect I have this type of problem, I just dump the entire file into ChatGPT along with the error message, and it almost always shows me quickly where I went wrong.

Things that didn’t work so well

Creating whole function bodies

I experience something close to cognitive dissonance when Copilot creates an entire function body for me. Before even naming the function, I’ll have mentally planned out what I want it to do, and when Copilot writes the body, I then have to figure out what its version is doing and map it to what I would have done. I find that a bit confusing.

Unit tests

Lots of people find LLM tools such as CodiumAI great for generating unit tests, but my own experience has been that if you need to test multiple scenarios that follow a pattern with subtle differences, the LLM can run ahead of itself and generate hundreds of near-identical, meaningless tests (test_case01, test_case02, test_case03, and so on). So, one needs care here.
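For contrast, here’s the kind of table-driven test I’d rather end up with: the subtle differences between scenarios live in the data, not in hundreds of copy-pasted test bodies. (The ‘normalize’ helper is a made-up stand-in for whatever function is actually under test.)

```python
# 'normalize' is a made-up stand-in for whatever function is under test.
def normalize(s: str) -> str:
    return s.strip().lower()

# One table-driven test instead of test_case01, test_case02, ...:
# each (input, expected) pair captures one subtle variation.
CASES = [
    ("  Hello ", "hello"),
    ("WORLD", "world"),
    ("mixed Case\n", "mixed case"),
    ("", ""),
]

def test_normalize():
    for raw, expected in CASES:
        assert normalize(raw) == expected, f"failed on {raw!r}"

test_normalize()  # raises AssertionError if any case fails
```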

You need to pick a tool

Most IDEs (integrated development environments) support only one LLM assistant at a time, so it’s difficult to work on a single project and compare the LLMs side by side, which is unfortunate.

Tips for making the most of LLMs

Prompts matter

Creating the best possible prompt for an AI tool like Copilot is more of an art than a science. Don’t be afraid to try things out, and learn what works best for different challenges. Clear function names and descriptions are not just good coding practices, they’re also good prompts.

Ask the LLM for help

This is purely anecdotal, but I’m convinced that if you’re nice to the LLM, it’ll be nice back to you. So saying ‘please help me’ works better than just telling the tool to ‘write this code’. Try to imagine that the LLM works with you, rather than for you.

LLMs do sometimes get it wrong

LLMs can be prone to hallucinations: the code may look right, but really isn’t. If you’re working in a language you know, it’s hopefully easy to spot the error and prompt the LLM to fix it. Even if you can’t spot it yourself, especially when working in an unfamiliar language (like AppleScript), simply running the generated code gives you an error message that you can feed straight back into the same prompt thread, and the LLM will often fix it for you.

Pre-prompt with useful context

I’ve found that LLM tools need plenty of useful context. Make sure you complete the “custom instructions”, and don’t be afraid to experiment if they don’t work, because it really matters. For example, my custom instructions tell ChatGPT that “I am a developer and data engineer, working with SQL and Python”, that “when answering, assume I have a Python environment with the necessary packages installed”, and that “when explaining code, always try to give examples”.

I haven’t formally measured the impact of LLM tools on my work, but I definitely feel that they have removed many points of friction in the development process, making it easier and faster for me to produce better-quality code.