Vibe Coding

Alright, I know I’m pretty late to the party on this one, but I wanted to have some meat on this post, and so far AI-assisted programming has been anything but that. Over the last year I’ve used it for minor scripting, pull-request reviews, and other small tasks. The hilarious thing is that as I write this post, Copilot auto-complete keeps suggesting whole paragraphs of text rambling on about how AI is a “game-changer” and “revolutionizing the way we code”. Fortunately for us humans, it’s a little more nuanced than that.
First, a little about my setup. My employer has a corporate Copilot license for use by the various R&D teams, and I’ve configured the Copilot plugin for VS Code, Android Studio, and Xcode. When the plugins first came out about a year ago, they were just a client wrapper for the chat terminal that you would normally interact with on the web. Since that initial offering they’ve matured, adding “Ask”, “Agent”, and “Plan” modes.

“Ask” is basically the functionality they had a year ago: ask a question, get an answer. “Agent”, on the other hand, has the ability to make changes to your code base, automatically implementing features and fixing bugs. “Plan” bridges the gap between the two. It uses read-only tools to read your codebase, identify necessary changes, and produce a detailed, ordered set of atomic steps. Unlike “Ask”, it accepts a broader context and can refine its output iteratively. Unlike “Agent”, “Plan” does not take any actions on its own; instead it writes a plan, either as a separate markdown file or within the conversation, which you or an agent can execute later. Perhaps its biggest shortcoming is that it cannot leverage MCP (Model Context Protocol) integrations.
MCP servers are the newest agentic innovation to come out. They allow the model to access additional context through other applications that implement MCP. For example, if you have a calendar application that implements the protocol, the model can access your calendar data to schedule meetings or set reminders. The Jira, GitHub, and Figma desktop clients all now offer MCP integrations, which means the model can read your Jira tickets, GitHub pull requests, and Figma designs to inform its responses.
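Under the hood, MCP is built on JSON-RPC 2.0: the client asks a server what tools it exposes, then invokes them by name. As a rough sketch, a tool invocation against a Jira-style MCP server might look something like this (the `get_issue` tool and its `issue_key` argument are hypothetical names for illustration; the `tools/call` method itself comes from the MCP specification):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "get_issue",
    "arguments": { "issue_key": "ABC-123" }
  }
}
```

The server replies with the tool’s result, which the model folds into its context before answering, the same way it would use text you pasted into the chat.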
I recently leveraged this by integrating the Jira and Figma MCPs into my workflow. I asked the model to review a Jira ticket as well as its associated Figma design. The model was able to access the Jira ticket to understand the requirements and acceptance criteria, and then accessed the Figma design to understand the visual requirements. Over the course of about half an hour the model implemented a rough draft of the changes necessary. It used our design system to implement the UI elements, and then implemented the necessary logic to make the feature work.
There were quite a few misses, though. The code didn’t account for the need to scroll if a user had a long sharer list, and it hallucinated text that wasn’t actually present in the Figma design. It also failed to add the @SerialName annotations that our other DTOs (Data Transfer Objects) use, which would inevitably have caused crashes for users upgrading their app version. Funnily enough, it generated other issues that slipped past me but that Copilot’s automatic pull-request reviewer did catch. For thirty minutes of working with the model, I generated about three hours’ worth of cleanup that I had to do after the fact.
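For context on that last miss: with kotlinx.serialization, omitting @SerialName means the serializer falls back to the Kotlin property name as the JSON key, so a snake_case payload from the backend (or from data persisted by an older app version) no longer matches and decoding throws at runtime. A minimal sketch, with a hypothetical DTO whose names are purely illustrative:

```kotlin
import kotlinx.serialization.SerialName
import kotlinx.serialization.Serializable
import kotlinx.serialization.json.Json

// Hypothetical DTO for illustration; not the real codebase's types.
@Serializable
data class ShareTargetDto(
    // Without @SerialName, the expected JSON key would be "userId";
    // a payload keyed "user_id" would then fail to decode (a
    // MissingFieldException), crashing when older persisted data
    // or a snake_case API response is read after an app upgrade.
    @SerialName("user_id") val userId: String,
    @SerialName("display_name") val displayName: String,
)

fun main() {
    val payload = """{"user_id":"u-42","display_name":"Ada"}"""
    val dto = Json.decodeFromString<ShareTargetDto>(payload)
    println(dto.userId)
}
```

The annotation is easy for a human reviewer to miss too, which is exactly why a consistent convention across DTOs matters; the agent simply didn’t pick the convention up from the surrounding files.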
Ultimately, I think the biggest value I got out of this was the ability to quickly generate a rough draft of the feature. The model took the requirements and design and turned them into code about twice as fast as I could have. However, the quality of that code was not great, and it required a lot of cleanup to reach a production-ready state. Going forward I plan to keep using AI-assisted programming as much as I can, but it’s a long way off from letting me vibe code my way through my day-to-day work.
Photo by Luke Jones on Unsplash