How I currently AI as a product person
A field report (partially written by the tools it's about)

Back in 2022, one of my New Year's resolutions was to increase my typing speed. I wasn't slow, but I wanted to be faster. I practiced on typing.com, saw some improvement, but eventually plateaued. My goal was to close the gap between thought and text. It never quite happened.
In early 2025, I discovered Wispr Flow and the problem disappeared. Not because I type faster now, but because I type far less. Dictation changed how I work. In fact, a big portion of this post was dictated through Wispr Flow.
That got me thinking about all the other tools I've picked up. This past year, I had the chance to consult on two products and take on a new role. Three fresh starts. Each time, I found myself setting up my workflow from scratch. And each time, I noticed I was doing the same things. That made me realize I have a stack now.
This is not a tutorial or a guide on how to use AI. It's not a showcase of the "perfect setup", nor is it the most optimal one. It's a field report on what works for me. So here it is:
Humble beginnings
It started with ChatGPT in early 2023. Custom GPTs specifically. It wasn't as good as what we have today, but it provided a lot of value. I've been using the concept of Custom GPTs ever since - first ChatGPT, and for the last year and a half - Claude Projects.
Each time I start fresh, I set up the same thing: a project loaded with context across several areas:
- company details and strategic goals
- product vision
- current feature set and roadmap
- user insights and learnings
- team structure and dynamics
- technical details of how the product works (more on that later)
- client relationships and pipeline
- revenue and financial metrics
- competitive landscape
Once I feel confident I've covered the basics, I start having conversations. Where are my blind spots? Does any of this contradict itself? What should I dive into next? Who should I talk to? This gives me a map of what I'm missing, and then I either figure out how to get it (person, doc, code) or I mark it as a question mark that needs to be discussed with the team.
These early conversations help me structure the context better for the LLM, and they also give me a clearer mental model of the product, its current state, and where it should go next.
But more importantly, this base project becomes the foundation for what I call my PM sidekick projects.
Once I fine-tune the context and structure it in a more meaningful way, I use it to create separate projects for different purposes - one for strategy, one for writing issues, one for discovery, one for GTM, and one for customer feedback. Each has its own prompt tailored to that task type, but they all share the same base fine-tuned context.
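The setup above can be sketched in a few lines: one shared, fine-tuned base context, plus a task-specific prompt per sidekick. This is a minimal illustration, not a Claude Projects API - all names and prompt text are placeholders.

```python
# Minimal sketch of the sidekick setup: one shared base context,
# plus a task-specific prompt per sidekick. All names and prompt
# text are illustrative placeholders, not a real Claude Projects API.

BASE_CONTEXT = """\
Company details and strategic goals: ...
Product vision, feature set, and roadmap: ...
User insights, team structure, technical details: ...
"""

SIDEKICK_PROMPTS = {
    "strategy": "Help me pressure-test product strategy and trade-offs.",
    "issues": "Turn rough notes into well-structured Linear issues.",
    "discovery": "Help me plan and synthesize user research.",
    "gtm": "Help me shape go-to-market plans.",
    "feedback": "Cluster and summarize customer feedback into themes.",
}

def build_system_prompt(sidekick: str) -> str:
    """Combine the shared base context with one sidekick's instructions."""
    return f"{BASE_CONTEXT}\n---\n{SIDEKICK_PROMPTS[sidekick]}"
```

The point of the split is that improving the base context improves every sidekick at once, while each prompt stays short and focused on its task type.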
Here is the prompt to my sidekick for writing issues. Feel free to drop a comment if you want to check out the rest of the sidekicks.
That's the foundation. But the day-to-day isn't just strategy and tickets. It's meetings, communication, understanding technical details, and showing stakeholders what you mean. Here's what else I use.
Staying present
I'll be honest: initially, I was skeptical about note-taking tools. Maybe that's because my first encounter with one was the annoying Read.ai thingy I saw in some meetings, which produced an unstructured chunk of text I didn't like reading and didn't find particularly useful. However, I gave Granola a shot (as it was part of Lenny's bundle) and was pleasantly surprised.
Before, I'd sit in meetings half-listening, half-typing. Did I catch that action item? Did I write down what was promised to me? That made it challenging to be fully present in the conversation.
Granola changed that. Now I stay present in conversations. I do weekly recaps and summaries, and I ask it to evaluate how well I articulate my thoughts. They also added recipes - templates like Nikita Bier's "will this product idea go viral" or leadership coaching based on the Mochary Method. I haven't tried any of those yet, but I like that they're expanding what a meeting tool can do.
A very concrete example from my day-to-day is my biweekly product update. Every two weeks, I send an update with the following categories: delivered, WIP, client pulse, and extras. It goes out to stakeholders and team members. It's great practice for keeping people informed, giving kudos, and checkpointing my own progress. The same approach applies to my biweekly call with leadership covering roadmap, strategy, and pipeline: gather context, let Claude help me structure an agenda.
Before, I did this manually. Now I take all my weekly calls from Granola, all Linear tickets worked on in the last two weeks, put them into my base PM sidekick project in Claude, and get a structured output. Much faster.
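The gathering step can be sketched as a small helper that assembles the prompt I paste into the base PM sidekick - Granola notes plus recent Linear tickets, framed by the four update categories. The data shapes and field names here are assumptions for illustration; in practice this is copy-paste, not an automated pipeline.

```python
# Sketch of assembling the biweekly-update prompt: meeting notes
# (exported from Granola) plus Linear tickets worked on in the last
# two weeks. Data shapes and field names are illustrative assumptions.

from datetime import date, timedelta

def biweekly_update_prompt(meeting_notes: list[str],
                           tickets: list[dict],
                           today: date) -> str:
    """Build the prompt to paste into the base PM sidekick project."""
    since = today - timedelta(days=14)
    parts = [
        f"Draft my biweekly product update ({since} to {today}).",
        "Structure it into: delivered, WIP, client pulse, and extras.",
        "",
        "Meeting notes from the last two weeks:",
        *(f"- {note}" for note in meeting_notes),
        "",
        "Tickets worked on:",
        *(f"- [{t['status']}] {t['title']}" for t in tickets),
    ]
    return "\n".join(parts)
```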
Increasing your throughput
I mentioned Wispr at the start. It's worth expanding on. I'd tried dictation before - the built-in Mac stuff was never good enough. Wispr is different. It picks up what I want to say, removes filler words, and learns from my edits.
I use it for longer prompts, PRD outlines, and explaining my thoughts before asking an LLM to structure them. The skill I couldn't improve didn't matter in the end. Dictation is now core to how I work.
Code
I don’t come from an engineering background, and as a PM, I need to understand how the product works. Before, this meant going through files, asking engineers to explain things, and slowly piecing together how systems work. It took time.
Cursor changed that. I use the Ask mode constantly. How does this work? Where are the dependencies? How are these connected? I get a deep understanding of a codebase much faster than before, and I don’t need to constantly bother engineers (despite knowing how much they LOVE that hehe).
But I don’t use Cursor just for understanding. I also use it for prototyping. This is my current flow: I create an actual issue with an overview, reasoning, expected behaviour, design, flow, and acceptance criteria. Then I give it to Cursor exactly like I'd share it with a developer in Linear or Jira. Most of the time recently, it one-shots it. From there, I create a screenshot or even a branch to show engineering the functionality and give them an idea of what needs to be implemented.
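The issue structure above can be sketched as a template. The section names are the ones I listed; the placeholder contents are illustrative:

```markdown
## Overview
One-paragraph summary of the change.

## Reasoning
Why now, and what problem it solves.

## Expected behaviour
What the user sees and does, step by step.

## Design
Link to mockups or screenshots.

## Flow
Entry points, states, edge cases.

## Acceptance criteria
- [ ] Criterion one
- [ ] Criterion two
```

The same document works for both audiences: a developer picking it up in Linear or Jira, and Cursor generating a first pass.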

Linear issue -> Cursor -> result
Depending on your organization, as a PM, you can open PRs for small fixes or copy changes. Why not implement features as well (like the examples above), if you feel confident? @linear has a great integration with Cursor for this.

Of course, this depends on your relationship with the engineering team and how engineering work is usually handled. In my experience, and from talking to fellow devs, smaller companies tend to be more open to this, while it's a big no-no in bigger enterprises.
New ideas
Sometimes I want to show something, not describe it. When it's a new feature, a new screen, or a new project, often it makes more sense to start fresh and highlight the functionality and vision rather than trying to build it inside an existing codebase with Cursor.
I've tried Bolt, Lovable, and v0. For me, v0 works best. The UI style fits my taste, and it handles context well. So I stuck with it.
A concrete example: I was working on a new analytics page for a DeFi product, and instead of wiring it up in the codebase, I used v0 to create the concept and scaffolding. The goal wasn't working code - it was to agree on what metrics we wanted to show, what interacting with the dashboard should feel like, and what the core workflows would be. Much faster to align on vision first, then get to the nitty-gritty design details.
My usual flow: explain what I want to my issues sidekick -> drop it into v0 -> move to Cursor to refine (if needed) -> deploy to Vercel. It works well for prototyping something that doesn’t need specific product styling or refined functionality.
What didn’t stick
Not everything works for everyone. That's fine.
ChatPRD didn't stick for me. I tried it, but couldn't find the balance between getting a structured document with enough detail versus something lighter. I already have my own structure for initiatives and projects that works well for me, so I didn't have much incentive to push it and get more out of it.
Same with Claude Code. I tried it, but I was already deep into the Cursor workflow. Part of it is control: in many cases, I'm examining an existing codebase rather than creating something new, and Cursor feels better for that. I can switch models and modes quickly, and the UI lets me review files much faster. Based on my experience, Claude Code seems better suited for building new things from scratch.
Here's the thing though - I learned not to permanently discard tools. Models improve. What didn't work six months ago might be exactly what you need tomorrow. As noted above, I've experienced this multiple times. Early on, models worked okay but not great. As new models came out, they got better and better for various types of tasks.
What’s next
And that’s why I purposefully set aside time to test out new things and retest ones that didn’t work for me before. Here is what’s on my list right now:
A unified sidekick experience - my PM sidekicks all hold the same context, but they're separate. There is room for improvement here. Something more unified. A router, basically. I say what I need - a diagram, a go-to-market idea, a Linear issue (with MCP to directly open it), a strategy question - and it picks the right sidekick and runs with it. Prototyping too - describe a feature, route it to Cursor or v0. One place to start, and it figures out where to go. That’s the vision.
For Cursor, I keep repeating myself. Every prototype, I need to specify that I don’t want any working functionality, just UI. I want to create rules so I don't have to say that every time and keep the output focused on what I actually need - a visual to show, not something that works end-to-end. Same with running things locally. Right now, if I want to spin up a new codebase, I often need to connect to various services and endpoints. I'm explicitly asking it to mock those so I can see a screen locally without running the whole stack. I need to figure out if a rule or a hook is the right approach here.
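One way the prototype-only rule could look, assuming Cursor's project-rules format (a Markdown file under `.cursor/rules/`) - treat the frontmatter keys and filename as assumptions, not a verified setup:

```markdown
---
description: Prototype-only mode for PM mockups
alwaysApply: true
---

- This is a visual prototype, not production code.
- Build UI only: no real API calls, no auth, no database access.
- Mock every external service and endpoint with static sample data
  so the app runs locally without the rest of the stack.
- Prefer hardcoded example data over wiring up real state.
```

If that works, the instruction travels with the repo instead of being repeated in every prompt.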
I recently started running Clawdbot (or now @moltbot) - an open-source project that gives you a persistent AI assistant on your own VPS (or mac mini 🤠🤠🤠), connected to Telegram, with access to your tools and files. Setup wasn't plug-and-play. I spent some time on security alone as there are quite a few attack vectors to consider. Two things sold me on going through the setup. First, the promise of proactiveness - an assistant that doesn't just wait for prompts but reaches out when something needs attention. I haven't fully experienced that yet, but it's the reason I bothered. Second, the sheer range of what it can access - you connect your tools and just tell it what to do. And it all happens through Telegram, which I use daily. No new app, no dashboard, no context switch. You text it like you'd text a friend, and it does the thing. So far it's been great for the quick stuff - summarizing YouTube videos, pulling my X timeline and giving me a read on what's trending. But I'm still in the shallow end. The real test is the serious workflows: calendar reminders, nudging me to write more for the newsletter, flagging things I'd otherwise miss.
Another item on my list is to explore Miro's new AI features. Miro is one of my favorite products. I use it for architecture diagrams, user flows, and generally making sense of things visually. Diagramming helps me think, and it also helps when validating ideas with others - showing a diagram often lands better than explaining. They've added AI features recently and I'm curious how they'd fit into that workflow.
Finally, Claude Code. I want to give it another shot. With Opus and all the recent hype, this feels like one of those times where something that didn't click before might click now. I'm also curious about AskUserQuestionsTool (seems useful for building something from scratch where you want more back-and-forth) and ralph (fomo based on all the hype on X).
I have a mobile app idea I want to try building and it seems like the perfect opportunity to try it out. But first I need a more detailed PRD that captures the technical design and functionality questions I have in mind. Might be a good chance to revisit ChatPRD too.
Closing
I started writing this to solidify my own process. Turns out, putting it into words helped me see what's working and what I still want to figure out. This isn't the final version - tools change, models improve, and I'll probably look back at this in six months with a different setup.
If this was useful, share it with someone who might find it helpful. And if your stack looks nothing like mine - even better. What's in yours? What have you tried and dropped? The abandoned tools are often the more interesting conversation. Drop a comment.