Vibe coding continues to gain traction in Silicon Valley, and former Monzo CEO Tom Blomfield has thoughts on how to maximize its potential.
Coined just two months ago by Andrej Karpathy, an OpenAI cofounder, the term refers to using AI to write code from plain-language instructions.
Experienced engineers are using it to save time, and those with nontechnical backgrounds are coding everything from dating apps to games.
Blomfield, now a group partner at Y Combinator, shared tips for people looking to get more out of vibe coding in a video posted by the accelerator on Friday. Here are three pieces of advice he gave.
Pick the right tool and create a comprehensive plan
Blomfield advised users to plan ahead and experiment to find the tool that best supports their skill level and desired end product.
He found that tools like Lovable and Replit were suited for beginners, whereas more experienced coders could use Windsurf or Cursor.
“Work with the LLM to create a comprehensive plan,” he said in the video, referring to large language models. “Put that in a markdown file inside your project folder and keep referring back to it.”
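The plan file Blomfield describes can be as simple as a checklist the model rereads before each session. A minimal sketch of what such a file might look like (the project name and sections here are hypothetical, not from the video):

```markdown
# Plan: habit-tracker app

## 1. Data model        [done]
- User, Habit, CheckIn tables

## 2. Auth              [in progress]
- Email/password signup and login

## 3. Habit CRUD        [todo]
## 4. Streak view       [todo]

Notes: keep all endpoints under /api/v1; don't modify auth once section 2 is done.
```

Each later prompt can then point the model at a single section of the file.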
He suggested using the LLM to carry out the plan section by section, rather than building the product in one go.
“This advice could change in one or two months, as the models are getting better,” he added.
Do version tests on the product
Blomfield said that when he prompted AI tools repeatedly for the same coding task, the results degraded as the model accumulated “layers of bad code.”
He advised using the large language model to write tests that simulate someone clicking through a version of the site or app, to gauge how well the features are working.
Sometimes, LLMs make unnecessary changes to these features, he said, and integration tests can catch those changes more quickly.
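In practice, tests like these are usually end-to-end tests driven by a browser-automation tool such as Playwright. The self-contained sketch below fakes the same idea with an in-memory app so it runs without a browser; the app, its methods, and the flow are all hypothetical stand-ins:

```python
# Hypothetical in-memory stand-in for the app under test.
class TodoApp:
    def __init__(self):
        self.items = []

    def add(self, text):
        # Equivalent to typing into the box and clicking "Add".
        self.items.append({"text": text, "done": False})

    def toggle(self, index):
        # Equivalent to clicking an item's checkbox.
        self.items[index]["done"] = not self.items[index]["done"]


def test_add_and_complete_item():
    """Walk through the core user flow, so a regression introduced
    by a later LLM edit fails loudly here."""
    app = TodoApp()
    app.add("buy milk")
    app.toggle(0)
    assert len(app.items) == 1
    assert app.items[0]["done"] is True
```

Running this test after every LLM edit surfaces unrequested behavior changes before they pile up.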
Write instructions for the LLMs
Blomfield said he found that some models succeeded where others failed. If a user hits a specific bug, it helps to reset all changes and give the LLM detailed instructions to fix it on a clean code base.
“Logging is your friend,” Blomfield said.
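One way to act on that advice is to log a function's inputs and the branch it takes, then paste the log output into the prompt so the model sees what actually happened. A minimal sketch, with a hypothetical discount function:

```python
import logging

logging.basicConfig(level=logging.DEBUG,
                    format="%(levelname)s %(name)s: %(message)s")
log = logging.getLogger("checkout")


def apply_discount(total, code):
    # Log the inputs and the branch taken; this is the output
    # you would paste back to the LLM when reporting the bug.
    log.debug("apply_discount(total=%s, code=%r)", total, code)
    if code == "SAVE10":
        log.debug("matched SAVE10, applying 10%% off")
        return round(total * 0.9, 2)
    log.debug("no matching code, returning total unchanged")
    return total
```

The log lines double as a precise bug report: they show the exact inputs and path, which is more useful to the model than "the discount is wrong."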
Another tip he offered was to use small files and a more modular, service-based architecture, where the LLM has clear API boundaries.
An upside of this approach is that it avoids a single monolithic repository spanning multiple projects, which is harder to manage and creates more integration challenges.
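A "clear API boundary" here just means each service exposes a few narrow methods and hides its own data, so the LLM can edit one small file without touching the rest. A toy sketch (the service names and methods are hypothetical):

```python
from dataclasses import dataclass, field


@dataclass
class UserService:
    """Owns user data; other code only calls these two methods."""
    _users: dict = field(default_factory=dict)

    def create(self, email: str) -> int:
        uid = len(self._users) + 1
        self._users[uid] = email
        return uid

    def email_for(self, uid: int) -> str:
        return self._users[uid]


@dataclass
class NotifyService:
    """Depends on UserService only through its public methods,
    never on its internal storage."""
    users: UserService

    def welcome(self, uid: int) -> str:
        return f"Welcome, {self.users.email_for(uid)}!"
```

Keeping each service in its own small file gives the model a self-contained context to work in, instead of one sprawling codebase.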