Do massively parallel research with Claude Cowork

Credit: Nano Banana 2. Prompt: Claude. Pre-prompt. Troy Angrignon. What a team! LOL

Today I needed to answer a big, sprawling question for a client engagement. The kind of question where you know there are at least six distinct bodies of knowledge you need to absorb before you can make an architecture decision — and each one is deep enough to be its own rabbit hole.

Instead of manually setting up six parallel chat tabs and launching each report by hand, I had Claude Cowork do it for me by driving Chrome.

Here's exactly what happened — the wins, the friction, and the failure at the end.

The Problem

I was designing a technical system and needed to understand the current landscape across multiple dimensions before committing to an approach. The research covered foundation model capabilities, specialized platform comparisons, optimization techniques, pipeline architecture patterns, output reliability, and real-world production case studies. Six distinct research domains, each requiring hundreds of sources to cover properly.

Running them sequentially through Deep Research would take hours of wall-clock time. But they were completely independent of each other — perfect for parallelization.

The Solution: Claude Cowork + Chrome Automation

I was already working in a Claude Cowork session where I'd built a working proof-of-concept for the client. The natural next step was the research phase to inform the production architecture.

Step 1: Write the Prompts

I started with one massive research prompt covering everything. Claude correctly told me it was trying to do too much. I said:

"Instead break this up into as many distinct reports as you need. Each should stand alone. I'll run them all in parallel."

Claude wrote six focused, standalone prompts — each detailed enough to drive a thorough Deep Research report, each completely independent. I had Claude append "Do not ask any follow-up or clarifying questions" to each one so they'd run unattended.

This decomposition step matters more than you'd think. The key is to bracket the entire problem and solution space — cover the problem landscape (how others frame this problem), the solution landscape (what approaches exist), the evaluation criteria (how to compare), and the operational reality (what actually works in production vs. what's theoretical). Each prompt needs to be fully self-contained because the Deep Research agent has zero context from the other prompts.
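As a sketch, the decomposition step can be as simple as a list of standalone briefs with the unattended-run instruction appended. The topics below come from the research domains above, but the wording and helper function are illustrative, not the actual prompts from the session:

```python
# Illustrative sketch of the decomposition step. The brief wording is
# an assumption, not the actual prompts Claude wrote in the session.
NO_FOLLOWUP = "Do not ask any follow-up or clarifying questions."

topics = [
    "foundation model capabilities",
    "specialized platform comparisons",
    "optimization techniques",
    "pipeline architecture patterns",
    "output reliability",
    "real-world production case studies",
]

def make_prompt(topic: str) -> str:
    """Build one fully self-contained research brief.

    Each brief restates the full framing, because the Deep Research
    agent that receives it has zero context from the other prompts.
    """
    return (
        f"Produce a deep research report on {topic}. "
        "Cover the problem landscape, the solution landscape, "
        "the evaluation criteria, and the operational reality of "
        "what actually works in production. "
        + NO_FOLLOWUP
    )

prompts = [make_prompt(t) for t in topics]
```

The point of the structure is the last clause: every brief carries the unattended-run instruction and its own complete context, so any one of them can be pasted into a fresh tab in any order.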

Step 2: Automate the Browser

This is where it gets interesting. I asked Claude:

"Can you open 6 tabs in my Chrome browser, navigate to Claude.ai, insert these prompts, and select Research mode?"

Claude Cowork has access to Chrome through the "Claude in Chrome" integration. It can open tabs, navigate pages, click buttons, type text, and interact with web UIs — including Claude.ai itself. Yes, that means Claude controlling Claude through a browser. Claudeception.

Here's the workflow Claude executed for each tab:

  1. Create a new tab and navigate to claude.ai/new
  2. Select the model — click the model dropdown, select Opus 4.6
  3. Enable Research mode — click the "+" toggle menu button, check the "Research" checkbox
  4. Paste the prompt — use JavaScript clipboard paste to insert the full prompt text
  5. Click Send — submit the research request
  6. Handle the connectors popup — after submission, Claude.ai shows a popup asking about tool integrations. Click "Disable all tools" then "Confirm"
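The six steps above are regular enough to express as data and replay per tab. A minimal sketch — the `browser` client and its `navigate`/`click`/`paste` primitives are hypothetical stand-ins, not the actual Claude-in-Chrome interface:

```python
# Hypothetical sketch: the fixed per-tab sequence as data, replayed
# against a browser-automation client. The client interface here is
# an assumption for illustration only.
PER_TAB_STEPS = [
    ("navigate", "claude.ai/new"),
    ("click",    "model dropdown -> Opus 4.6"),
    ("click",    "'+' menu -> Research checkbox"),
    ("paste",    None),                 # replaced by the prompt text
    ("click",    "Send"),
    ("click",    "Disable all tools -> Confirm"),
]

def run_tab(browser, prompt: str) -> None:
    """Replay the fixed step sequence for one research tab."""
    for action, target in PER_TAB_STEPS:
        # The paste step is the only one that varies between tabs.
        getattr(browser, action)(prompt if action == "paste" else target)
```

Encoding the flow as a list is what makes the "skill" idea later in this post plausible: the only per-tab variable is the prompt text.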

Step 3: Walk Away

Once all six tabs were running, each Deep Research agent was independently searching 500–600+ sources and writing its report. Total hands-off time: about 10–15 minutes per report, all running simultaneously.

The result: six comprehensive research reports, each drawing from hundreds of sources, covering the full landscape I needed. What would have been a week of reading compressed into about 15 minutes of wall time.

What Actually Worked Well

The prompt decomposition was excellent. Having Claude break one massive research question into six standalone prompts was the right call. Each report went deep instead of skimming. One report pulled from 539 sources with specific benchmarks, pricing comparisons, and failure mode analysis. Another covered 570 sources on a completely different dimension.

Chrome automation for repetitive UI tasks is powerful. The workflow for each tab was identical — model selection, Research mode, paste, submit, handle popup. Claude executed it six times without me touching the keyboard. This is exactly the kind of tedious, repetitive work that browser automation should handle.

JavaScript clipboard paste solved input issues. Direct typing via Chrome automation had issues with special characters. Claude figured out it could dispatch a ClipboardEvent with the full prompt text, which worked perfectly every time.

Where It Got Messy

The connectors popup was a recurring trap. After every Research submission, Claude.ai pops up a dialog asking about tool integrations with three toggles (for whatever services you have connected — in my case Excalidraw, Gmail, and Google Calendar). Claude had to learn — through my corrections — that this popup appears every single time and requires clicking "Disable all tools" followed by "Confirm." On the first few attempts, Claude either missed the popup entirely, opened sub-menus instead of toggling, or clicked "Cancel Research" instead of "Confirm."

Coordinate-based clicking is fragile. Early on, Claude was clicking UI elements by screen coordinates. This led to misclicks — most critically, clicking "Cancel Research" when it meant to click "Confirm" on one tab, which killed the running research. The fix was using reference-based element identification (the find tool) to locate buttons by their semantic role, then clicking by reference ID. Much more reliable.

Tab 1 was submitted without Research mode. Claude submitted the first prompt before we'd figured out where the Research toggle was (it's under the "+" button, not the model dropdown). That tab ran as a regular Opus 4.6 chat instead of a Deep Research report. We had to create a 7th tab and re-run it correctly.

Context window exhaustion killed the session. This is the big one. The back-and-forth of Chrome automation — screenshots, element finding, clicking, error recovery, re-clicking — consumed enormous amounts of context. The conversation hit the context limit three times and had to be resumed from compressed summaries. By the third resumption, when I asked Claude to read all six completed research reports and synthesize them into a strategy document, the session was too bloated to continue. It died with "This conversation is too long to continue."

The Context Window Problem Is Real

The Chrome automation workflow is context-hungry by nature. Every screenshot is a large image token. Every element-find call returns a list of page elements. Every click-verify-screenshot cycle adds thousands of tokens. Multiply that by six tabs, each requiring 5–6 interactions, plus error recovery... and you're burning through context fast.
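A back-of-envelope budget shows why this adds up — every number below is an illustrative assumption, not a measured value:

```python
# Back-of-envelope context budget. All token counts are illustrative
# assumptions -- real costs vary with screenshot size and page
# complexity, and with how much error recovery each tab needs.
SCREENSHOT_TOKENS = 1_500   # one screenshot as image tokens (assumed)
FIND_TOKENS       = 800     # one element-find result (assumed)
CLICK_TOKENS      = 200     # one click call plus confirmation (assumed)

def tab_cost(interactions: int, error_retries: int = 2) -> int:
    """Rough token cost of automating one tab, with some retries."""
    per_step = SCREENSHOT_TOKENS + FIND_TOKENS + CLICK_TOKENS
    return (interactions + error_retries) * per_step

# Six tabs, roughly six interactions each, plus recovery overhead.
total = 6 * tab_cost(interactions=6)
```

Even with these made-up but conservative numbers, the orchestration alone runs into six figures of tokens before a single report has been read, let alone synthesized.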

The session survived long enough to submit all six prompts and confirm they were running. But it couldn't survive long enough to also read the results and write the synthesis. That required a fresh conversation.

Lesson learned: If you're doing heavy browser automation, plan for the context budget. Submit the prompts in one session, then start a fresh session to read and synthesize the results.

Should This Have Been a Skill?

Absolutely — and I'm kicking myself for not building it first. A Cowork skill would have:

  1. Pre-loaded the UI element references — no discovery phase, no "where's the Research button?" back-and-forth that ate context
  2. Skipped unnecessary screenshots — the biggest context drain was the verify-screenshot-after-every-click pattern. A skill that knows the flow can click blind and only screenshot on errors
  3. Used reference-based clicks from the start — no coordinate guessing, no Cancel/Confirm misclicks
  4. Budgeted context explicitly — a skill could even warn "this will consume ~X tokens, plan to start a fresh session for synthesis"

The pattern is clean enough to encode: decompose → open tabs → (select model → enable Research → paste → submit → handle popup) × N. The connectors popup handling alone would have saved 15+ minutes of coaching had it been pre-documented in a SKILL.md.
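A sketch of what such a SKILL.md might capture up front — the element names and wording are hypothetical reconstructions from this session, not an actual published skill:

```markdown
# Parallel Deep Research (sketch -- element names are hypothetical)

## Per-tab flow
1. Navigate to claude.ai/new
2. Model dropdown -> select Opus 4.6
3. "+" menu -> check "Research" (it is NOT under the model dropdown)
4. Paste the prompt via a ClipboardEvent (direct typing mangles
   special characters)
5. Click Send
6. The connectors popup appears EVERY time: click "Disable all
   tools", then "Confirm" -- never "Cancel Research"

## Rules
- Click by element reference, never by screen coordinates
- Screenshot only on errors; trust the known flow otherwise
- Budget context: orchestrate here, synthesize in a fresh session
```

Everything in the "Rules" section is a lesson this post paid for in context tokens; pre-loading it is the whole value of the skill.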

So I built the skill after the fact.

How You Can Do This

If you want to run parallel Deep Research reports from a Claude Cowork session:

Prerequisites:

  • Claude Pro or Team plan (for Cowork access and Deep Research)
  • Chrome browser with the "Claude in Chrome" integration enabled
  • Enough patience for the first run while Claude learns the UI

The process:

  1. Start a Cowork session and describe your research need. Ask Claude to break it into independent sub-questions, each as a standalone research prompt. Push for prompts that bracket the full problem and solution space — not just "compare the options" but also "what are the failure modes," "what are people actually running in production," and "what does the cost model look like at scale."
  2. Ask Claude to open Chrome tabs and submit them. Say something like: "Open 6 tabs in Chrome, navigate to claude.ai, select Opus 4.6, enable Research mode under the + button, paste each prompt, submit, and disable the connectors popup."
  3. Coach Claude through the first tab. It will likely need guidance on where Research mode is and how to handle the connectors popup. Be specific: "Research is under the + button" and "Disable all tools, then click Confirm, not Cancel Research."
  4. Let it run. Once Claude has the pattern down (usually after 1–2 tabs), it handles the remaining tabs on autopilot.
  5. Save the reports manually. Each completed research report generates a Document artifact in the Claude.ai tab. You can download them as PDF or Markdown from the dropdown next to the "Copy" button. The programmatic download triggers a native OS print dialog that Cowork can't control, so you'll need to click Save yourself.
  6. Start a fresh session for synthesis. Don't try to read and synthesize all the reports in the same session that did the browser automation. The context will be too bloated. Start a new Cowork session, point it at the saved reports, and ask for the synthesis there.

The Bigger Picture

What I did here is a pattern that's going to become routine: using AI to orchestrate other AI sessions.

The "inner" Claude instances (the six Deep Research agents) each did 10–15 minutes of focused, deep research across hundreds of sources. The "outer" Claude (the Cowork session) handled the mechanical task of setting them all up and launching them in parallel.

The friction points — connectors popups, model selection, Research mode toggles — are all UI artifacts that will presumably get smoothed out over time. The core capability is already there: you can use Claude to drive Claude, turning a sequential research workflow into a parallel one.

The context window limit is the real constraint today. Browser automation is token-expensive, and a session that does heavy automation doesn't have much room left for heavy analysis. The solution is simple: use separate sessions for orchestration vs. synthesis.

For now, this workflow saved me hours. Six deep research reports, each pulling from 500+ sources, all running simultaneously while I went to make coffee. The future of knowledge work isn't doing the research yourself — it's knowing how to decompose the question and orchestrate the research at scale.
