If something is a repetitive habit that you can do almost without thinking, there is a good chance an AI could infer that entire chain.
I think what's more likely is that an AI-based interface will end up being superior after it has had a chance to observe your personal preferences and approach on a conventional UI.
So both will still be needed, with an AI helping at the low end and high end of experience and the middle being a training zone as it learns you.
I think I wasn't clear enough -- these habits I'm talking about are things like "press cold water button, press start" or "press warm water button, press start" or "tap 'News' app grouping, tap 'NY Times' icon".
There's nothing to infer. The sequence is already short. There are no benefits from AI here.
But you raise a good point, which is that there are occasionally things like 15-step processes that people repeat a bunch of times, that the AI can observe and then take over. So it's basically useful for programming macros/shortcuts as well. But that still requires the original UI -- it doesn't replace it.
I don't know - the timer app on my oven is trivial too. But I always, always use Alexa to start timers. My hands are busy, so I can just ask "How many minutes left on the tea timer?"
Voice is not really clumsy, compared to finding a device, browsing to an app, remembering the interface, etc.
Already, when we meet a new app, we (I) often ask someone to show me around or tell me where the feature I want is. That's no easier than asking my house AI. Harder, really.
Hard to overestimate the laziness of humans. I'll get very accustomed to asking my AI to do ordinary things. Already I never poke at the search menu in my TV; I ask Alexa to search for me. So, so much easier. Always available. Never have to spell anything.
Everyone agrees setting timers in the kitchen via voice is great precisely because your hands are occupied. It's a special case. (And often used as the example of the only thing people end up consistently using their voice assistant for.)
And asking an AI where a feature is in an app -- that's exactly what I was describing. The app still has its UX though. But this is exactly the learning assistance I was describing.
And as for searching with Alexa, of course -- but that's just voice dictation instead of typing. Nothing to do with LLMs or interfaces.
Alexa's search is a little different - it's context-independent. I can ask for a search from any point in the TV app - in some other menu, while watching another show, heck even when the TV is turned off.
And when describing apps - I imagine the AI is an app-free environment, where I just ask those questions of my AI assistant, in lieu of poking at an app at all.
Most user interfaces already have a much finer granularity and number of options than your examples.
When taking a shower, I would like fine control over the water temperature, preferably with a feedback loop regulating the temperature. (Preferably also the regulation changes over the duration of the showering.)
Choosing to read the NY Times indeed is only a few taps away, but navigating through and within its list of articles is nowadays fast and intuitive thanks to a lot of UI advancements.
My point being, short sequences are a very limited set within a vast UI space.
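The feedback loop mentioned above can be sketched as a simple proportional controller. This is a minimal, hypothetical model: `control_step`, the gain, and the valve interface are illustrative assumptions, not any real product's API.

```python
# Minimal sketch of proportional control for shower water temperature.
# The valve interface (0.0 = all cold, 1.0 = all hot) is hypothetical.

def control_step(target_c, current_c, valve_pos, gain=0.05):
    """Return a new mixing-valve position nudged toward the target temperature."""
    error = target_c - current_c
    new_pos = valve_pos + gain * error
    return max(0.0, min(1.0, new_pos))  # clamp to the valve's physical range
```

A real controller would add integral/derivative terms and account for the lag between the valve and the showerhead, but the core loop is this small.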
People go for convenience and speed, oftentimes even if there's some accuracy cost. AI fulfills this preference, especially because it can learn on the go.
> When taking a shower, I would like fine control over the water temperature, preferably with a feedback loop regulating the temperature. (Preferably also the regulation changes over the duration of the showering.)
That exists, but it’s expensive because of the electronics and mechanics involved. There are so many interfaces with this exact problem.
You also almost certainly don’t want non-deterministic, hallucination-prone AI controlling physical systems.
Indeed, and to take the UI a step further, humans often prefer automation, if it works reliably. A complicated UI would become simple: just step into the shower.
There’s no complicated UI. You just turn a knob that sets a digital temperature readout.
If you want the shower to save your temperature preferences and start automatically, there’s no reason to build in a computer capable of running an AI.
But in reality you almost certainly don’t want a system like this because you don’t want an AI accidentally turning on your shower when you’re not home, when you go to clean it, or grab a razor, or when your toddler wanders in.
Granted an AI could try to determine intent, but it’s never going to get it 100% right. Which is why for physical systems like this you almost always want a physical button to signal intent.
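The "save your temperature preferences" behavior really is just a lookup table, which is the point about not needing an AI-capable computer. A minimal sketch (user names, temperatures, and defaults are made up for illustration):

```python
# Sketch: deterministic per-user shower presets need only a dictionary,
# plus a physical button to signal intent. No AI required.

presets = {}

def save_preset(user, temp_c):
    presets[user] = temp_c

def start_shower(user, default_c=37.0):
    """Called when the physical start button is pressed; fully deterministic."""
    return presets.get(user, default_c)

save_preset("alice", 39.5)
```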
It would become less expensive, using fewer sensors and actuators, by using the predictive and learning abilities of an AI. You can, for safety reasons, keep a mechanical temperature limiter in the loop.
Temperature can be measured in different ways: IR radiation and sound can be measured from a distance. The relationship between the temperature at the source, the temperature of the water exiting the showerhead, and time can be learned. Water can be heated in different ways. The valve could also be a pump. Our reaction to the temperature of the water can be sensed.
Who knows, AI may come up with simpler or cheaper solutions that did not cross our minds.
I would say, time will tell.
Prompt engineering and using multiple AI models in parallel might find ways to cancel out most hallucinations similar to how consensus-based replication works.
It might, if hallucinations are truly random and not correlated with anything shared between models, for example something inherent to the data they are trained on. Given how locked down I think potential training data is going to become, and the amount of data required, I think that sharing data between models is almost guaranteed.
Also, that sounds like an awful lot of computing power for everyday UIs. It also doesn’t solve the non-determinism problem.
I totally get your point, but I think that AI will allow much "smarter" behavior. Where every appliance is an expert in doing what it is intended to do.
So sure, it will still have buttons, but those buttons are really just preset AI prompts on the backend. You can also just talk to your appliance and nuance your request however you want to.
A TV remote's channel button would just send the prompt "Next channel," but if you wanted, you could talk to your TV and say "Skip 10 channels" or "make the channel button do (arbitrary behavior)."
The shortcuts will definitely stay, but they will behave closer to "ring bell for service" than "press selection to vend".
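The buttons-as-preset-prompts idea can be sketched in a few lines. The button names, prompt strings, and backend are all hypothetical assumptions for illustration:

```python
# Sketch: remote-control buttons as preset prompts to an assistant backend.
# Names and prompt strings here are made up for illustration.

button_prompts = {
    "channel_button": "Next channel",
    "volume_button": "Raise the volume slightly",
}

def remap_button(button, prompt):
    """'Make the channel button do (arbitrary behavior)' is just a remapping."""
    button_prompts[button] = prompt

def press(button):
    # In a real device this prompt would be sent to the AI backend.
    return button_prompts[button]

remap_button("channel_button", "Skip 10 channels")
```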