In a world that prioritizes precise user control over the output text, how do you justify the value of relinquishing that control, even when the agent's actions are ones the user would have made anyway? It only takes a single bad edit for the user to lose all trust.
Yeah, bad edits are especially costly when we keep building on top of them.
Our first take on solving this is rollbacks, which let you delete edits up to a chosen point in the conversation. So if you do notice a bad edit, you can roll back to just before it.
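A minimal sketch of how conversation-level rollback could work, assuming edits form a simple append-only history. The class and method names here (`Conversation`, `rollback_to`, etc.) are hypothetical illustrations, not the editor's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class Edit:
    description: str
    new_text: str

@dataclass
class Conversation:
    base_text: str
    edits: list[Edit] = field(default_factory=list)

    def apply(self, edit: Edit) -> None:
        self.edits.append(edit)

    def current_text(self) -> str:
        # The latest edit wins; with no edits, fall back to the base text.
        return self.edits[-1].new_text if self.edits else self.base_text

    def rollback_to(self, index: int) -> None:
        # Discard every edit after `index`, restoring that point in time.
        self.edits = self.edits[: index + 1]

convo = Conversation(base_text="draft v0")
convo.apply(Edit("tighten intro", "draft v1"))
convo.apply(Edit("bad rewrite", "draft v2 (broken)"))
convo.rollback_to(0)          # drop the bad edit and everything after it
print(convo.current_text())   # → draft v1
```

The point of modeling it this way is that a rollback is just truncating the history list, so later edits never silently build on a state the user has rejected.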
After this, there's the proactive agent, which checks its own work again and suggests further changes it thinks are needed. You can give feedback and guide it.
With LLMs we do lose a bit of control, but I think the editor should work to solve this.