I often have to tell cascade to "continue"
A.J. Brown
Occasionally Cascade will just stop partway through a series of tasks. As an example, I'll ask it to write some tests for me. It'll write them, then find some issues, fix those issues, and try again. Then it'll notice some more issues, tell me it's going to fix them, fix them, and just stop.
If I say "continue" it resumes the task.
It's mostly annoying, but it also breaks flow. If Cascade automatically runs all of the tasks it's doing, I can task-switch while it works toward the result I want. But when I have to continually tell it to "keep going," I have to pay attention to it.
fatherearth2011@gmail.com
I integrate a project tracker, and then work off the tracker with some rules.
Abid Hussain
It's genuinely a waste of credits too when you have to stop a hanging process and tell it, "Process was left hanging, so I had to stop it."
Honestly, I've lost count of how many credits I've lost doing this. I hope they fix this.
Olivier Buitelaar
This is genuinely becoming unbearable now... It is constantly hanging on "running" and nothing happens...
Saul Howells
I think it should continue until it has completed the task you have asked it to do.
Miguel Tomas
I believe this feedback can be closed after the last update. The new Continue button serves the purpose.
Max Jiang
We have some safeguards in place that stop Cascade from accidentally going on for too long. In Wave 8, we've included a "Continue" button (hotkeyed to opt/alt + enter) to be more transparent and informative when it's stopped for this reason.
Peter Elmered
Max Jiang That makes sense. It would be unnecessary to let the AI model go on for too long. I first thought that this was where the initial token was consumed and that you need a new token to continue, but that is not the case, right?
Tony Dehnke
Max Jiang Awesome, looking forward to trying it out.
William Daugherty
Max Jiang I'll bet it's not an intended safeguard, as it happens sometimes within a minute and sometimes within 15 minutes. It seems to happen especially after some tool calls fail. However, if this is an intended safeguard, I as a Pro user would LOVE to have control of it: say, a timer that I'm responsible for, configurable in 15-minute increments up to an hour. If I run out of credits, then I do; there can be a disclaimer.
Charlie Irwin
I'd like early access to the Timeline view or dev mode with execution graphs. I'm building complex multi-agent workflows using Task Master and MCPs, and better task visibility is essential.
Ian Kleinfeld
Agreed. I suspect this has something to do with the problems from https://windsurf.canny.io/feature-requests/p/use-a-scratch-disk-for-chat-speed-history-access
Omkar Pathak
Sorry, but this behavior is getting pretty annoying. It not only breaks flow, but the model also doesn't pick up well where it left off. There are also several tool call errors along the way. It feels like I'm losing prompt credits just to say "pick up from the last step and continue."
This wasn't a big problem before, but now my codebase is getting bigger, on the order of 5k+ lines, and it will only keep growing. Also, Claude and GPT models seem to break much more often than Gemini 2.5 Pro. Please fix. Thank you!