Google voice recognition AI models go in-device
Quote from Ndubuisi Ekekwe on May 8, 2019, 4:34 PMGoogle's voice recognition AI models go in-device, removing latency in the process. By storing the models on the phone, not in the cloud, the latency associated with network round-trips disappears, making conversations more natural.
Google has managed to shrink its voice recognition models down from hundreds of gigabytes to half a gigabyte, making them small enough to fit right on a phone.
By storing the models locally, Google is able to eliminate the latency involved in the back-and-forth pings to the cloud, making conversations with Assistant almost instantaneous. And since it runs on the device, it will work even in airplane mode. The company showed off the new speed by firing off rapid-fire voice requests, with very little delay between commands (like "Call me a Lyft" or "Turn on my flashlight") and their resulting actions.