Hacker News | hrudolph's comments

# Roo Code 3.45.0 Release Updates | Smart Code Folding | Context Condensing

In case you did not know, Roo Code is a free and open-source AI coding extension for VS Code.

## Smart Code Folding

When Roo condenses a long conversation, it now keeps a lightweight “code outline” for recently used files—things like function signatures, class declarations, and type definitions—so you can keep referring to code accurately after condensing without re-sharing files. (thanks shariqriazz!)

> *Documentation*: See [Intelligent Context Condensing](https://docs.roocode.com/features/intelligent-context-conden...) for details on configuring and using context condensing.
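Roo's actual outline extractor isn't shown in the release notes; as a minimal sketch of the idea, a hypothetical regex-based pass over a Python-style source file could keep only declaration lines (function signatures and class headers) so a condensed conversation can still reference the code's shape:

```python
import re

# Hypothetical sketch: keep only the "outline" lines (function
# signatures, class declarations) of a recently used file, so the
# conversation can refer to code accurately after condensing.
OUTLINE_RE = re.compile(r"^\s*(def |class |async def )")

def code_outline(source: str) -> str:
    """Return just the signature/declaration lines of a source file."""
    return "\n".join(
        line.rstrip() for line in source.splitlines() if OUTLINE_RE.match(line)
    )

sample = '''\
class Cache:
    def get(self, key):
        return self._data.get(key)

def make_cache(size):
    return Cache()
'''
print(code_outline(sample))
```

A real implementation would use a language-aware parser (e.g. tree-sitter) rather than regexes, and would also capture type definitions, but the retained artifact is the same kind of lightweight skeleton.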



## Worktrees

Worktrees are easier to work with in chat. The Worktree selector is more prominent, creating a worktree takes fewer steps, and the Create Worktree flow is clearer (including a native folder picker), so it’s faster to spin up an isolated branch/workspace and switch between worktrees while you work.

> *Documentation*: See [Worktrees](https://docs.roocode.com/features/worktrees) for detailed usage.

## Parallel tool calls (Experimental)

Re-enables parallel tool calling (with added isolation safeguards) so you can use the experimental “Parallel tool calls” setting again without breaking task delegation workflows.

## QOL Improvements

- Makes subtasks easier to find and navigate by improving parent/child visibility across History and Chat (including clearer “back to parent” navigation), so you can move between related tasks faster.
- Lets you auto-approve all tools from a trusted MCP server by allowing all tool names, so you don’t have to list each one individually.
- Reduces token overhead in prompts by removing a duplicate MCP server/tools section from internal instructions, leaving more room for your conversation context.
- Improves Traditional Chinese (zh-TW) UI text for better clarity and consistency. (thanks PeterDaveHello!)

## Bug Fixes

- Fixes an issue where context condensing could accidentally pull in content that was already condensed earlier, which could reduce the effectiveness of long-conversation summaries.
- Fixes an issue where automatic context condensing could silently fail for VS Code LM API users when token counting returned 0 outside active requests, which could lead to unexpected context-limit errors. (thanks srulyt!)
- Fixes an issue where Roo didn’t record a successful truncation fallback when condensation failed, which could make Rewind restores unreliable after a condensing error.
- Fixes an issue where MCP tools with hyphens in their names could fail to resolve in native tool calling (for example when a provider/model rewrites “-” as “_”). (thanks hori-so!)
- Fixes an issue where tool calls could fail validation through AWS Bedrock when the tool call ID exceeded Bedrock’s 64-character limit, improving reliability for longer tool-heavy sessions.
- Fixes an issue where Settings section headers could look transparent while scrolling, restoring an opaque background so the UI stays legible.
- Fixes a Fireworks provider type mismatch by removing unsupported model tool fields, keeping provider model metadata consistent and preventing breakage from schema changes.
- Fixes an issue where task handoffs could miss creating a checkpoint first, making task state more consistent and recoverable.
- Fixes an issue where leftover Power Steering experiment references could display raw translation keys in the UI.
- Fixes an issue where Roo could fail to index code in worktrees stored inside hidden directories (for example “~/.roo/worktrees/”), which could break search and other codebase features in those worktrees.
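The hyphen/underscore fix above reflects a common failure mode: some providers rewrite “-” as “_” in tool-call names, so an exact string lookup misses. The notes don't show Roo's actual resolver; as a hedged sketch, resolution by a normalized key tolerates the rewrite:

```python
# Hypothetical sketch of tolerant tool-name resolution: resolve a
# tool call by a normalized key ("-" folded to "_", lowercased)
# instead of an exact string match, so provider rewrites still hit.
def normalize(name: str) -> str:
    return name.replace("-", "_").lower()

def resolve_tool(registry: dict, called_name: str):
    by_key = {normalize(n): fn for n, fn in registry.items()}
    return by_key.get(normalize(called_name))

tools = {"read-file": lambda path: f"read {path}"}

# The provider rewrote the hyphen, but the lookup still resolves.
fn = resolve_tool(tools, "read_file")
print(fn("a.txt"))
```

The trade-off is that two registered tools differing only by hyphen vs underscore would collide under one key, so a real resolver should reject such registrations up front.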

## Provider Updates

- 5 provider updates — see full release notes for more detail.


# Roo Code 3.43.0 Release Updates

This release updates Intelligent Context Condensation, removes deprecated settings, and fixes export and settings issues.

## Intelligent Context Condensation v2

Intelligent Context Condensation runs when the conversation is near the model’s context limit. It summarizes earlier messages instead of dropping them. After a condense, Roo continues from a single summary, not a mix of summary plus a long tail of older messages. If your task starts with a slash command, Roo preserves those slash-command-driven directives across condenses. Roo is less likely to break tool-heavy chats during a condense, which reduces failed requests and missing tool results.

Settings changes: the Condense prompt editor is now under *Context Management* and Reset clears your override. Condensing uses the active conversation model/provider. There is no separate model/provider selector for condensing.
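Roo's internal condensing code isn't published in these notes; as a minimal sketch of the “single summary” rule described above, assuming a hypothetical `Message` shape and a `summarize` callback standing in for the model call, the condensed history is the preserved slash-command directives, one summary message, and the recent tail:

```python
from dataclasses import dataclass

@dataclass
class Message:
    role: str
    text: str

# Hypothetical sketch: after condensing, Roo continues from a single
# summary plus the recent tail -- never a summary plus a long tail of
# older messages. Slash-command directives from the condensed portion
# are carried across the condense.
def condense(history, keep_recent, summarize):
    head, tail = history[:-keep_recent], history[-keep_recent:]
    directives = [m for m in head if m.text.startswith("/")]
    summary = Message("system", summarize(head))
    return directives + [summary] + tail

msgs = [Message("user", "/mode architect"),
        Message("user", "design the cache"),
        Message("assistant", "here is a plan"),
        Message("user", "now implement it")]
out = condense(msgs, keep_recent=1,
               summarize=lambda ms: f"summary of {len(ms)} messages")
print([m.text for m in out])
```

The invariant worth noting is that repeated condenses stay bounded: each pass replaces the older portion with one summary, so history length never accumulates stale summaries.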

## QOL Improvements

- Removes the unused “Enable concurrent file edits” experimental toggle to reduce settings clutter.
- Removes the experimental *Power Steering* setting (a deprecated experiment that no longer improves results).
- Removes obsolete diff/match-precision provider settings that no longer affect behavior.
- Adds a `pnpm install:vsix:nightly` command to make installing nightly VSIX builds easier.

## Bug Fixes

- Fixes an issue where MCP config files saved via the UI could be rewritten as a single minified line. Files are now pretty-printed. (thanks Michaelzag!)
- Fixes an issue where exporting tasks to Markdown could include `[Unexpected content type: thoughtSignature]` lines for some models. Exports are now clean. (thanks rossdonald!)
- Fixes an issue where the *Model* section could appear twice in the OpenAI Codex provider settings.

## Misc Improvements

- Removes legacy XML tool-calling code paths that are no longer used, reducing maintenance surface area.

## Provider Updates

- Updates Z.AI models with new variants and pricing metadata. (thanks ErdemGKSL!)
- Corrects Gemini 3 pricing for Flash and Pro models to match published pricing. (thanks rossdonald!)


Context condensation only stays safe when it behaves like a controlled artefact. Preserve the active directives, freeze a small set of must-keep facts, and treat the summary as versioned output with a stop rule for when it drops constraints. That turns “near the limit” from random truncation into a repeatable workflow.
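The “versioned output with a stop rule” idea above can be sketched concretely. This is an illustrative structure, not Roo's implementation: `CondensedSummary` and its substring-based constraint check are hypothetical, standing in for whatever validation a real pipeline would use:

```python
# Hypothetical sketch: treat the condensed summary as a versioned
# artefact. Freeze a small set of must-keep facts, and refuse to
# accept any new summary version that drops one (the "stop rule").
class CondensedSummary:
    def __init__(self, must_keep):
        self.must_keep = list(must_keep)
        self.versions = []

    def propose(self, text: str) -> bool:
        missing = [fact for fact in self.must_keep if fact not in text]
        if missing:
            return False          # stop rule: a constraint was dropped
        self.versions.append(text)
        return True

s = CondensedSummary(must_keep=["target: py3.11", "no network calls"])
s.propose("Refactor plan. target: py3.11. no network calls.")  # accepted
s.propose("Refactor plan, shorter.")  # rejected: drops both constraints
print(len(s.versions))
```

A production check would match facts semantically rather than by substring, but the control-flow point stands: a summary that loses a frozen constraint is rejected, not silently adopted.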


## QOL Improvements

- Adds a usage limits dashboard in the OpenAI Codex provider so you can track your ChatGPT subscription usage and avoid unexpected slowdowns or blocks.
- Standardizes the model picker UI across providers, reducing friction when switching providers or comparing models.
- Warns you when too many MCP tools are enabled, helping you avoid bloated prompts and unexpected tool behavior.
- Makes exports easier to find by defaulting export destinations to your Downloads folder.
- Clarifies how linked SKILL.md files should be handled in prompts.

## Bug Fixes

- Fixes an issue where switching workspaces could temporarily show an empty Mode selector, making it harder to confirm which mode you’re in.
- Fixes a race condition where the context condensing prompt input could become inconsistent, improving reliability when condensing runs.
- Fixes an issue where OpenAI native and Codex handlers could emit duplicated text/reasoning, reducing repeated output in streaming responses.
- Fixes an issue where resuming a task via the IPC/bridge layer could abort unexpectedly, improving stability for resumed sessions.
- Fixes an issue where file restrictions were not enforced consistently across all editing tools, improving safety when using restricted workflows.
- Fixes an issue where a “custom condensing model” option could appear even when it was no longer supported, simplifying the condense configuration UI.
- Fixes gray-screen performance issues by avoiding redundant task history payloads during webview state updates.

## Misc Improvements

- Improves prompt formatting consistency by standardizing user content tags to `<user_message>`.
- Removes legacy XML tool-calling support so new tasks use the native tool format only, reducing confusion and preventing mismatched tool formatting across providers.
- Refactors internal prompt plumbing by migrating the context condensing prompt into `customSupportPrompts`.

## Provider Updates

- Removes the deprecated Claude Code provider from the provider list.
- Enables prompt caching for the Cerebras `zai-glm-4.7` model to reduce latency and repeat costs on repeated prompts.
- Adds the Kimi K2 thinking model to the Vertex AI provider.


Try this and watch it supercharge your workflow! https://docs.roocode.com/features/boomerang-tasks/


For those of you who are not familiar with Roo Code, it is a free 'AI Coding Agent' VS Code extension. Here are the latest release notes!

Sonnet 3.7 shows notable improvements in:
- Front-end development and full-stack updates
- Agentic workflows for multi-step processes
- More accurate math, coding, and instruction-following

