
Using LLMs to Write Code - What Did I Learn?

4 min read
Sanjeev Sarda
High Performance Developer

What did I learn from trying to use LLMs to code?

Off I go into the woods at night on my llama wearing my wizard hat

Context is important

Models have a priori knowledge that may conflict with what you want them to do. To detect and counter this, it's useful to validate the model's knowledge in steps.

Break tasks down and validate them piece by piece rather than going for a one-shot approach if you want to realize value more quickly.
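To make that concrete, here's a minimal sketch of the kind of stepwise loop I mean: probe what the model already knows, then ask for one small piece at a time and validate each piece before moving on. It assumes an OpenAI-compatible endpoint via the openai Python client; the model name and prompts are purely illustrative.

```python
# A minimal sketch of stepwise prompting: check what the model already "knows"
# before asking it to generate code, then validate each piece separately.
# Assumes an OpenAI-compatible endpoint via the openai Python client;
# the model name and prompts below are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY (or point base_url at a local server)

def ask(prompt: str) -> str:
    """Send a single prompt and return the model's reply as text."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Step 1: probe the model's a priori knowledge so it doesn't fight your intent.
print(ask("What do you know about the FIX protocol's logon message? Be brief."))

# Step 2: only after checking that answer, ask for one small, reviewable piece.
print(ask("Write a Python function that parses a FIX logon message into a dict."))

# Step 3: run and review that piece yourself before asking for the next one.
```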

Fast but not fast enough

Using an LLM for coding takes time - it's fast, but not fast enough. I end up waiting between responses: I know the output will take a while, but not quite long enough to justify doing something else and risking a context switch.

As I started using LLMs more and more I got used to this, issuing chats and commands in a more async fashion and getting on with other things at the same time.

Changes Your Job

Your job changes to one of coaxing and enticing the LLM to do your bidding, often when you could do the same thing faster yourself if you're familiar with the language. I kept at it anyway, partly because it feels cool and partly because this is an experiment for me.

I'd be more worried about those leaves on your desk than an LLM taking your job, mate!

A worried guy

You feel like you're just chatting and not working at times, except when it isn't doing what you want it to do. You start to feel like the IP (intellectual property) is no longer in the code you're writing, but in the prompts you're using and your process.

Mental Load and Coding Velocity

You feel like you have less mental load vs. conventional development, but it makes you feel lazy - even the effort of copying and pasting code starts to become too much. I found myself tuning out at points and going on auto-pilot, which is dangerous when you're writing something like a protocol handler (financial, regulatory and reputational impact) where you need attention to detail.

Nero fiddling

I ended up fixing the produced code myself when needed, because it was close enough to what I wanted. I felt like a development manager, evaluating a developer's code and then doing the check-in, except I was the one responsible for fixing their mistakes instead of bouncing the work back to them. This is also partly a tooling issue.

I don't think my coding velocity increased in a language I was familiar with, but in languages I don't use every day or am not familiar with it was definitely a boost. Again, with the right tools I am sure my velocity would also increase in a language I am familiar with.

Learning and development

I found myself on several occasions reaching for my phone to chat to a model over OpenWebUI about something rather than googling it. I also found myself asking it whether it knew about particular technologies, libraries and frameworks, and for basic examples in each. Did I try and get it to code a code generator? You know it ;-)

Towards better tooling

Despite their shortcomings, LLMs have got me excited: there are still lots of unexplored ways developers can use them, as well as their value as a learning tool. The pace of development of the commercial models is also crazy fast, so I expect to see a lot of improvements in the near future.

I tried out OpenDevin, but frankly my laptop cannot handle it and I gave up trying, as I can realise value for myself by using a more incremental, human-driven process.

On the commercial side I've also tried out Copilot. For unit testing specifically, I'd recommend taking a look at Codium, which also shows a lot of promise.

My own next step with this technology is probably custom tool and process development - I don't think you can maximise the benefit of LLMs without re-engineering your process to take advantage of them.

Stay tuned.