But can it both say the word "cluck" and make a clicking sound?
Is the idea that, as these models grow in sophistication, they can properly interpret (or produce) the inflection, cadence, and emotion that are lost in TTS?
An STT model might misrecognize a word, but an audio LLM may understand the true word from the broader context. A TTS model has to guess the inflection and can get it completely wrong, whereas an audio LLM could understand how to talk naturally and with what tone (e.g., using a higher pitch when it's interjecting).
Speaking of interjection: an STT/TTS system will never interject, because it relies on VAD and heuristics to guess when to start or stop talking, and generally the rule is to only talk after the user has stopped. An audio LLM could learn how to converse naturally, avoid taking up too much conversation time, or even talk with a group of people.
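For context, the usual endpointing heuristic looks roughly like this. A minimal sketch, assuming the webrtcvad package, 16 kHz 16-bit mono PCM in 30 ms frames, and a silence threshold I picked arbitrarily:

    import webrtcvad

    SAMPLE_RATE = 16000
    FRAME_MS = 30
    END_OF_TURN_MS = 700  # arbitrary: how long the user must stay quiet

    def wait_for_end_of_turn(frames):
        """Consume 30 ms PCM frames; return once the user has spoken
        and then stayed silent for END_OF_TURN_MS."""
        vad = webrtcvad.Vad(2)  # aggressiveness 0 (least) to 3 (most)
        heard_speech = False
        silent_ms = 0
        for frame in frames:  # each frame: raw 16-bit PCM bytes, 30 ms long
            if vad.is_speech(frame, SAMPLE_RATE):
                heard_speech = True
                silent_ms = 0
            elif heard_speech:
                silent_ms += FRAME_MS
                if silent_ms >= END_OF_TURN_MS:
                    return  # heuristic: now, and only now, the agent talks

Note that the agent's only move here is to wait for silence; nothing in this loop can decide to jump in mid-utterance, which is exactly the structural limit an audio LLM could remove.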
An audio LLM could also produce music or sounds, or tell you what a song is when you hum it. There are a lot of new possibilities.
I say "could learn" for most of this because it requires good training data. From my understanding, most of these models are currently just trained on normal text datasets synthetically turned into voice with TTS, so they are effectively no better than a normal STT/TTS system; it's a good way to prove out an architecture, but it doesn't demonstrate the full capabilities.
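To make that concrete, the synthetic data pipeline amounts to roughly this. A sketch assuming the gTTS package; the corpus is a placeholder:

    from gtts import gTTS

    corpus = ["hello there", "how are you today"]  # stand-in for a real text dataset

    pairs = []
    for i, text in enumerate(corpus):
        path = f"clip_{i}.mp3"
        gTTS(text).save(path)       # synthesize a spoken version of the sentence
        pairs.append((path, text))  # (audio, transcript) pair for training

Every clip comes out with the same flat synthetic prosody, so there's nothing there for a model to learn about natural inflection or timing.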
You can work around this, by the way, by sending the output through a checking stage.
So: picture -> gpt4o -> out1; picture -> tesseract -> out2; (out1, out2) -> llm.
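A minimal sketch of that pipeline, assuming the openai Python client and pytesseract; the prompts and file name are made up for illustration:

    import base64

    import pytesseract
    from PIL import Image
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def transcribe_with_gpt4o(path):
        # out1: ask gpt-4o to read the image
        with open(path, "rb") as f:
            b64 = base64.b64encode(f.read()).decode()
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": [
                    {"type": "text", "text": "Transcribe all text in this image verbatim."},
                    {"type": "image_url",
                     "image_url": {"url": f"data:image/png;base64,{b64}"}},
                ],
            }],
        )
        return resp.choices[0].message.content

    def transcribe_with_tesseract(path):
        # out2: an independent transcript from classical OCR
        return pytesseract.image_to_string(Image.open(path))

    def reconcile(out1, out2):
        # checking stage: have an LLM merge the two transcripts and
        # flag the words where they disagree
        resp = client.chat.completions.create(
            model="gpt-4o",
            messages=[{
                "role": "user",
                "content": "Two OCR systems read the same image. Merge them into "
                           "one best transcript and note any disagreements.\n\n"
                           f"Transcript A:\n{out1}\n\nTranscript B:\n{out2}",
            }],
        )
        return resp.choices[0].message.content

    print(reconcile(transcribe_with_gpt4o("scan.png"),
                    transcribe_with_tesseract("scan.png")))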
Might work for sound too.
Latency is a big one, due to batching. You can't really interrupt the agent, which makes actual conversation clunkier. And yes, multimodal has better understanding. (I haven't seen any analysis of how well it perceives emotion; has anyone seen an analysis of this capability for GPT-4o?)
However, there are some other potential fringe benefits here: lower reply latency, better speaker diarization, and reacting to conversational pauses more naturally.
You can't put a pure text-with-keyboard interface on a robot; it would just become a wheeled computer.
Actually, this is a cool thing as a companion/assistant.
Yeah, that's the point. Without punctuation, no one can tell what inflection my "really" above should have, and even if it had been "Really?" or "Really!", there's still room for interpretation. Given a bet that voice interfaces need a Google moment (wherein, prior to Google, search was crap) to truly become successful (by interpreting and creating inflection, cadence, and emotion, as you mentioned), creating such a model makes a lot of sense.