Users discovered that when using Claude Code with Gemini-3-pro, a single conversation consumes up to 14.9k tokens, far exceeding expectations. The issue involves the AI agent's context pre-reading mechanism: users speculate that the project structure or code is pre-read at startup. They ask whether this high consumption is normal and how to reduce the volume of pre-read context. The article also compares the user experience of Claude and Cursor, shares stress tests of 2api, and asks for MCP plugin recommendations. The discussion highlights the performance challenges AI tools face in real-world development and offers developers practical optimization advice.
Original link: Linux.do
Latest comments
I don't think the title of your article matches the content lol. Just kidding, mainly because I had some doubts after reading the article.
This AI landscape research is very in-depth, with a large amount of data; a valuable reference.
I occasionally read this travel site. The routes are inspiring to browse.
The article is quite insightful; the development trends of AI models are worth following.
Rich content, and the analysis of future trends is spot-on.
Thank you for sharing. I was worried that I lacked creative ideas, but your article gave me hope. I do have one question, though. Can you help me?
Fiber-optic technology is impressive; the article explains it quite thoroughly.
The article is very practical; I'd like to learn more related tips.