AI Tool Switching Is Stealth Friction – Beat It at the Access Layer
Colette Des Georges
Has your team’s sprint velocity actually improved since you approved all those AI coding tools?
If not, recent research by JetBrains and UC Irvine shows your developers may be facing a new dimension of context switching that resists the usual fixes.
The key finding was that most AI-assisted developers switched in and out of their IDEs more often, but 74% of those surveyed didn’t notice it. When context switching doesn’t feel like context switching, behavioral policies won’t catch it.
Consolidating down to a single AI tool would catch it, but at the cost of flexibility. Model capabilities evolve constantly. Locking into one vendor limits your team’s ability to learn, experiment, and stay competitive.
The good news is that there’s a solution that sidesteps both challenges – consolidating the access layer.
Here’s the research behind it, why it works, and how to apply it.
Developers complain about switching, just not this kind
In general, developers are outspoken about context switching killing productivity. Atlassian’s State of Developer Experience Report 2025 found developers citing switching context between tools as one of their biggest drags on productivity.
At the same time, developers report record productivity thanks to an ever-increasing array of AI tools. In the 2025 DORA State of AI-Assisted Software Development Report, respondents said that AI had a positive impact on delivery throughput, code quality, and almost every other key performance outcome.
Paradoxically, DORA also found no relationship between AI adoption and reduced friction or burnout. The organizational wins weren’t translating to a lighter day-to-day experience.
This disconnect between experience and performance points to something deeper. When researchers combine self-reported perceptions with objective behavioral data, the gap becomes clear.
- In the JetBrains/UC Irvine study mentioned above, 74% of surveyed AI-assisted developers didn’t notice an increase in their switching. Telemetry on 151 million IDE window activations across 800 developers told a different story: over the two-year study period, AI users’ monthly window switching trended upward while non-AI users’ did not, and this divergence was mostly invisible to those experiencing it. Conducted from October 2022 to October 2024, the research spanned ChatGPT’s launch and the initial scramble to adopt AI coding tools.
74% said switching hadn’t gone up.
Telemetry disagreed.
- Experienced open-source developers in a 2025 METR study believed AI tools made them 20% faster. Screen recordings showed the opposite.
The solution isn’t measuring or managing – it’s architectural. And there’s a proven pattern for architectural solutions to developer friction.
The platform-engineering lesson: Consolidation reduces cognitive load
Platform engineering is all about building internal tooling and infrastructure that lets developers self-service what they need without hitting speed bumps like tickets or approvals. The goal is to create “golden paths” that make the right ways the easy ways.
Traditionally, platform engineering has focused on the “outer loop” of everything after git push. This includes CI/CD pipelines, deployment automation, infrastructure provisioning, and security scanning.
AI tools, on the other hand, fragment the “inner loop” of everything before git push. GitLab’s 2025 Global DevSecOps Report found that 49% of development teams use more than five AI tools across use cases like code generation, testing, and documentation.
Standardization was the top motivation for platform initiatives according to Weave Intelligence’s State of AI in Platform Engineering 2025 report, but standardizing around a single AI tool doesn’t work when different models are better at different tasks.
Reducing developers’ cognitive load was the second-highest motivation. Apply that principle to AI tools: consolidate the access layer, not the options.
One environment, multiple AI tools
Since our study data was finalized in 2024, we’ve shipped two features that make JetBrains IDEs the consolidated access layer for your team’s AI tools of choice:
Bring Your Own Key (BYOK) lets your team use OpenAI, Anthropic, or any OpenAI-compatible provider with existing API keys. You maintain cost visibility through provider dashboards while developers access models directly in the IDE.
No browser tabs required. LLMs work inside the IDE.
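To make “OpenAI-compatible” concrete: such providers accept the same request shape as OpenAI’s chat-completions endpoint, so only the base URL and API key change per vendor. The sketch below builds that request with the standard library; the endpoint URL, key, and model name are placeholders, not real credentials or a documented JetBrains API.

```python
# Minimal sketch of the request shape an "OpenAI-compatible" provider
# accepts: a POST to <base_url>/chat/completions with a Bearer header
# and a JSON body. Only the base URL and key differ between vendors.
# (The endpoint, key, and model below are illustrative placeholders.)
import json


def build_chat_request(base_url: str, api_key: str, model: str, prompt: str):
    """Return the URL, headers, and JSON body for a chat-completions call."""
    url = f"{base_url.rstrip('/')}/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",  # the key your org already holds
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, headers, body


# The same builder works against any compatible provider:
url, headers, body = build_chat_request(
    "https://llm.example.com/v1",  # hypothetical compatible provider
    "sk-example", "example-model", "hello",
)
print(url)  # https://llm.example.com/v1/chat/completions
```

Because the wire format is shared, cost tracking stays in each provider’s own dashboard while the IDE remains the single place developers call any of them from.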
Agent Client Protocol (ACP) support means any ACP-compatible coding agent can work within JetBrains IDEs. ACP is an open standard we’re partnering with Zed on to ensure agents function across editors without vendor lock-in. The recently launched ACP Registry makes finding and configuring agents quick and easy.
All ACP-compatible agents are available in the IDE.
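Under the hood, ACP connects the editor to an agent subprocess via JSON-RPC messages. The sketch below shows generic JSON-RPC 2.0 request framing to illustrate the idea; the method name and params are placeholders, not the exact ACP schema (consult the ACP specification for the real method names and message shapes).

```python
# Illustrative JSON-RPC 2.0 framing of the kind ACP builds on: the
# editor sends requests to an agent subprocess and matches replies by id.
# "session/prompt" and its params here are placeholders, not the
# verbatim ACP schema.
import itertools
import json

_ids = itertools.count(1)  # monotonically increasing request ids


def jsonrpc_request(method: str, params: dict) -> str:
    """Serialize one JSON-RPC 2.0 request as a newline-delimited frame."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": next(_ids),
        "method": method,
        "params": params,
    }) + "\n"


frame = jsonrpc_request("session/prompt", {"text": "refactor this function"})
print(frame.strip())
```

Because the transport is a plain standard, any agent that speaks it runs inside any editor that hosts it, which is exactly the lock-in-avoiding property the protocol is for.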
Takeaway
AI-related switching doesn’t surface the same way as shifts between meetings, projects, or traditional tools. Developers notice it less, so they report it less. Behavioral policies can’t apply to what isn’t visible.
The fix is architectural, not managerial. In platform engineering, this principle applies to post-commit workflows. Apply it to pre-commit AI workflows by standardizing where developers access the tools: in the environment where they already write, test, and debug code.
AI in IDEs
AI in Software Development
---
[Original source](https://blog.jetbrains.com/ai/2026/02/ai-tool-switching-is-stealth-friction-beat-it-at-the-access-layer/)