Gemini Conductor: Google’s New Tool for Vibe Coding

🔗 Video link: https://www.youtube.com/watch?v=PB9sJnZyQ7g
🆔 Video ID: PB9sJnZyQ7g

📅 Published: 2025-12-29T15:57:05Z
📺 Channel: AI LABS

⏱️ Duration (ISO): PT8M14S
⏱️ Duration (formatted): 00:08:14

📊 Stats:
– Views: 5,806
– Likes: 123
– Comments: 10

Google AI just dropped Gemini Conductor, a new context engineering workflow for the Gemini CLI. But does it beat what's already out there? I tested Conductor against coding workflows like BMAD and Claude-based setups to see if this code generation system is worth your time. Here's my honest take.

In this video, I'm breaking down Google's new Conductor extension for Gemini CLI — their latest attempt at building a structured context engineering workflow for AI-assisted development.
Conductor works as an installable extension that gives you a set of slash commands to manage your entire development process. The idea is that you run a setup command, define your project, and then the system generates planning documents, tracks, specs, and implementation files that guide the AI through building your application step by step.
The workflow uses a track-based system where each feature or component gets its own folder containing a plan file and a spec file. The AI works through these tracks one by one, theoretically maintaining context and following a structured path from concept to completion.
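To make the track-based system concrete, here's a sketch of what that folder structure could look like. The directory and file names below are illustrative assumptions, not Conductor's documented layout:

```text
conductor/
  tracks/
    01-user-auth/     # one track per feature or component (hypothetical names)
      plan.md         # ordered implementation steps for this track
      spec.md         # requirements and acceptance criteria
    02-billing/
      plan.md
      spec.md
```

The point of the structure is that the AI only needs the current track's plan and spec in context, rather than the whole project at once.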
On paper, it sounds solid. It includes automatic test coverage requirements, Git integration with notes for tracking changes, code style guides for different languages, and a revert command that can undo agent mistakes using Git awareness.
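The "Git integration with notes" piece is presumably built on Git's native notes mechanism. The commands below are plain Git, not Conductor commands, and the repo setup is just a throwaway demo; they show how notes attach tracking metadata to a commit without rewriting it:

```shell
# Demo setup: a throwaway repo with one commit
cd "$(mktemp -d)"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "init"

# Attach a note to the latest commit (plain Git, not a Conductor command)
git -c user.name=demo -c user.email=demo@example.com \
    notes add -m "track: step 3 complete"

# Notes live outside the commit object, so the commit hash is unchanged
git notes show HEAD
```

Because notes don't alter history, a tool can record agent progress per commit without polluting commit messages or forcing rebases.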
But here's where things fall apart.
During my testing, the system repeatedly failed at basic context management. When defining the tech stack for a scalable production app, it missed critical components that I had to manually correct. The initial track it generated was far too broad — cramming way too many tasks into a single implementation cycle, which is a recipe for compounding errors.
The database schema it generated was incomplete, missing fields and relationships that were clearly implied by the project requirements I provided. I had to continuously guide and correct it through decisions that should have been obvious from the context.
The biggest red flag came when I asked it to switch from NPM to PNPM mid-setup. Instead of making a targeted change, it attempted a backup process that somehow deleted the entire Conductor folder — all the planning files, specs, and tracks gone. It then tried to reconstruct everything from memory, which defeats the entire purpose of having persistent context files.
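The irony is that Git itself, which Conductor already integrates with, makes this failure mode trivially recoverable: a committed folder can be restored in one command instead of reconstructed from memory. A minimal sketch, assuming the planning files live in a `conductor/` folder (a hypothetical name) and were committed:

```shell
# Demo setup: throwaway repo with a committed conductor/ folder
cd "$(mktemp -d)"
git init -q
mkdir -p conductor/tracks
echo "plan" > conductor/tracks/plan.md
git add conductor
git -c user.name=demo -c user.email=demo@example.com \
    commit -q -m "add conductor planning files"

# Simulate the agent wiping the folder
rm -rf conductor

# Restore it from the last commit -- no reconstruction from memory needed
git checkout HEAD -- conductor
cat conductor/tracks/plan.md
```

This is the kind of Git awareness the extension's revert command promises but, in my testing, failed to apply.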
Even during normal implementation, it marked tasks as complete when they clearly weren't. It put dummy API keys in environment variables and tried to push database schemas without ever asking me to set up the actual Supabase project or provide real credentials.
The core issue seems to be in how the command files and workflow instructions are written. Gemini is a capable model, so the problems are almost certainly coming from poor prompt engineering in the extension itself. The context loop isn't properly managed, changes aren't being tracked correctly, and the system doesn't know how to handle modifications without blowing up existing work.
Compare this to something like BMAD, which handles context changes by only updating the relevant files rather than nuking everything and starting over. That's what proper context engineering looks like — surgical updates, not scorched earth rebuilds.
For now, I wouldn't recommend Conductor for any serious end-to-end development work. If you need a structured AI coding workflow, BMAD remains the better option. For smaller projects, I still prefer building my own context files from scratch.
This might improve with updates, but in its current state, it's not ready for production use.