A Beginner No More: Extending My Journey With Claude and Product Thinking
- sathyavenkatesh
- Nov 18
- 5 min read
This post is a continuation of a series that began as a simple curiosity about what AI could do for a product manager. I started working on this primarily out of curiosity, attempting to use these workflows in my own founder's journey. What started as a small experiment has grown into a practical, repeatable workflow that I can use to think, plan, and build. Today's piece is about another step in that journey: using Claude to generate a real product roadmap and understanding what it takes to work with AI as a serious partner. If you've been following my posts, you know I experimented with building an "Uber for tutors" app. This was just an idea I had toyed with in the past, and I don't intend to build it into a product per se. My goals were to 1) visualize the wireframes, 2) fire up agents to do competitive analysis research, 3) get a complete roadmap for the product, all using Claude, and 4) document my hiccups along the way.
Step 1: Setting the Context
I began by launching Claude in the CLI and asked it to start multiple agents that would complete competitive research for me. I broadly defined what I was looking for from these agents. At that point I'd already structured a previous session with pertinent context (market, positioning, core features). I asked it to pull context from that previous session and fire up 5 agents, each of which would look for a competitor.
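As a rough sketch, a kick-off like this can be scripted with Claude Code's non-interactive print mode (`claude -p`). The prompt wording, file name, and agent count below are illustrative, not the exact prompt I used:

```shell
# Hypothetical kick-off prompt for the parallel research step.
# The file name and exact wording are assumptions for illustration.
cat > research_prompt.txt <<'EOF'
Pull the market and positioning context from our previous session.
Then start 5 parallel research agents. Each agent should pick one
competitor in the online tutoring space and report back on its
features, pricing, and key differentiator.
EOF

# Pass the prompt to Claude Code non-interactively (-p is print mode):
# claude -p "$(cat research_prompt.txt)"
```

Keeping the prompt in a file makes it easy to refine the ask between runs instead of retyping it in the chat.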

I spoke in natural language and watched Claude execute these tasks in parallel. I gave Claude complete freedom to choose the set of competitors, which is a change from how you would do this in the real world.
All PMs would, or rather should, have an idea of who their competitors are, and would tell Claude specifically whom to research. Within minutes I had a usable reference set. No surprises there; isn't that what Claude is supposed to do anyway?
Step 2: Extract strengths, gaps and strategic opportunities
Once the exploratory phase was complete, I asked Claude to summarize what it found:

Strengths that competitors have, gaps they are leaving open, and strategic opportunities for our app. Example output (edited for clarity) included bullet lists like:
- 3-tier verification system (competitors don't have this structure)
- Mobile-first design (better than Varsity/Preply's buggy apps)
- Clear trust signals (badges, ratings, stats on tutor cards)
- Student-centric UX (next lesson countdown, favorites, clean interface)

And what to add: transparent pricing and fair tutor pay, mandatory quality training for tutors, a hybrid instant + scheduled model, a parent dashboard, and a regional expansion strategy.
Then Claude identified our "biggest opportunity" as: Position as the ethical, transparent, quality-focused alternative to competitors plagued by billing complaints, poor quality, and tutor exploitation. Sounds good in theory, right? This is where your real skills as a PM matter. How do you know whether the opportunities Claude names are the right ones?
Well, you use a product opportunity framework, or simply ask the four Wh-questions: 1) What opportunity is this solving? (value proposition) 2) Whom is this solving the opportunity for? (target) 3) Why is Gyan Guru best suited? (your product's USP) and 4) How will you measure success? (KPIs). Yes, Claude makes it easy to fire off agents, but you cannot absolve yourself of validation and your own thought process either.
Step 3: Create a real-world roadmap for the execution phase
Having captured the competitor context and strategic opportunity, I asked Claude to create a roadmap. I told it what framework to use and what I was looking for. Claude began generating phases: MVP, early market roll-out, scale features, regional expansion, continuous improvement. Each milestone was framed with key tasks, metrics, dependencies, owners, and timelines.

The structure was surprisingly close to what I would have written myself, which made it a good starting point for refinement. I then asked Claude to dump all the research into a markdown file and save it into a local directory. This step is important. If we treat AI work as disposable chat history we lose continuity. If we treat it as a work artefact, it becomes part of versioned product thinking.
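A minimal sketch of that persistence step, assuming a `docs/research` directory and file name of my own choosing (the actual paths in my session were different):

```shell
# Create a home for AI-generated research artefacts (names are illustrative)
mkdir -p docs/research

# Ask Claude to write its findings to a markdown file, then version it:
# claude -p "Save today's competitor research as docs/research/competitors.md"
# git add docs/research/competitors.md
# git commit -m "Add competitor research from Claude session"
```

Committing the file is what turns a chat transcript into a durable, reviewable product artefact.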
Step 4: The Unexpected Interruption
In theory, this style of working was great; however, I consumed more tokens than I wanted to. I was not thinking about tool efficiency for the task, only about my own time savings.
Wrong move. I hit my session limits by the time I had finished dumping my research into separate files. I am a Pro user, so I was initially thrown off by this. Could I have consumed more tokens than what is allotted to a Pro user?

Claude would simply not respond to anything I typed, including requests to confirm whether I was a Pro user (a login re-check), how I could purchase tokens, and so forth. Since I had hit the limit, I was curious to see how the plans have evolved. Over the past few months, Anthropic has made several adjustments to how usage limits are handled for Pro and Max users. I have tried to summarize the changes in the table below, but use your own judgement on what this means for you.
| Plan | Key Changes in Recent Months | Impact for PMs |
| --- | --- | --- |
| Pro Plan | Five-hour session windows. Usage based on message count, token length, and attachments. Limits now enforced more tightly in long sessions. | Suitable for light to moderate product work. Heavy market-research sessions will hit limits. |
| Max Plan (5x) | Higher session throughput. Designed for power users. | Better for extended workflow sessions like roadmap building, competitive studies, or large document uploads. |
| Max Plan (20x) | Highest tier for researchers and developers. | Useful for PMs who treat Claude as a daily partner for deep work, experimentation, and product discovery. |
| Weekly usage enforcement | Introduced for heavy workloads like Claude Code. Applies soft weekly caps to prevent overuse. | PMs doing daily deeper work will need to pace tasks or upgrade tiers. |
Forced to take a break, I realized there are multiple things I can do to manage my token lifecycle if all I want from Claude on my Pro plan is natural-language processing plus research-intensive tasks. Here are some suggestions:
Be specific about the model you want to use. Claude defaults to Sonnet, which doesn't optimize for token spend. Haiku was a better option for straightforward tasks; similarly, if you want deep reasoning, use Opus.
Reduce the verbosity of your ask. For example: "Research all competitors and give me everything you can find about their features, pricing, history, team, funding, and create a comprehensive analysis" vs. a focused ask of "Research top 3 competitors. For each: features (5 bullets), pricing (1 table), key differentiator (1 sentence). Keep reports under 500 words each."
Avoid parallel agents (roughly 10x the tokens) if they aren't necessary. I could simply have run the agents sequentially and saved myself tokens.
Use progressive depth. Notice that I asked Claude to "research all competitors", which is by nature task-intensive. A better approach is to ask for a first pass, evaluate, then proceed to a deep dive.
By nature, Claude seems to be chatty, though perhaps less so than ChatGPT (my own opinion). Ask Claude to be concise; use words like "brief" and "short summary".
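Putting the first tip into practice: Claude Code accepts a `--model` flag, and a small helper can route each task type to the cheapest model that can handle it. The helper and its task categories are my own sketch, not a built-in feature; the `haiku`/`sonnet`/`opus` aliases are Claude Code's model shorthands.

```shell
# Illustrative helper: pick a model tier based on the kind of task.
pick_model() {
  case "$1" in
    extract|summarize) echo "haiku" ;;   # cheap and fast for simple asks
    research)          echo "sonnet" ;;  # balanced default
    roadmap|deep)      echo "opus" ;;    # reserve for genuine deep reasoning
    *)                 echo "sonnet" ;;
  esac
}

# Usage (commented out so the sketch stands alone):
# claude --model "$(pick_model roadmap)" -p "Draft a phased roadmap. Be brief."
pick_model summarize   # -> haiku
```

The design point is simply that model choice becomes an explicit decision per task instead of an invisible default that burns tokens.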
Closing Thoughts
Building a product roadmap inside Claude was more than a one-off test. It became a view into how AI can sit beside us as a thinking partner. The multi-agent research, the structured analysis, and the speed of synthesis were genuinely helpful. But the session limit was a reminder that even the most powerful tools come with constraints. For product managers who want to use AI as part of their everyday craft, the real learning is simple: understand your tools, respect their edges, and build processes that work around them.