A few weeks ago, I attended Qualcomm’s Beijing press event, where it announced the Snapdragon 710, and elegantly illustrated its vision for a future filled to the brim with on-device artificial intelligence.
I’ll be the first to admit AI has become something of a buzzword lately. As consumer interest in the technology has grown, so has its presence in smartphone marketing materials.
LG’s recently introduced ThinQ product branding is a great example. It shows how eager smartphone manufacturers are to exploit the AI trend to create the illusion of a superior product.
To be clear, the flagship LG G7 ThinQ definitely includes some AI features. Users will probably see it most when using the camera, which analyzes the scene and subject for better image quality. That sounds great, but when my colleague Lanh Nguyen reviewed the LG G7 ThinQ, he actually preferred the camera without AI. Ouch.
The marketing fluff surrounding AI extends throughout the industry. That’s unfortunate for a few reasons. Most notably, it encourages consumers to view AI as a gimmick rather than a developing technology. That perception will come back to haunt smartphone manufacturers as AI implementations mature.
Most people don’t know where AI begins and where it ends
It’s easy to see how marketing fluff is contributing to a more general sense of confusion surrounding AI. The industry is constantly telling consumers how great AI is, but look past the initial nods and smiles and it becomes apparent most people don’t know where AI begins and where it ends.
Behind the buzz
If we ignore the feel-good advertising and ill-timed branding, we can gain a better understanding of AI. Anything that makes a machine seem human could be classified as a product of artificial intelligence. More precisely, AI is anything that appears to exhibit human-like intelligence.
Much of computer science has focused on harnessing the power of computers rather than any sort of intelligence. Almost everything that happens when you interact with a computer happens as a result of detailed pre-programmed instructions. Sure, computers can check conditions and branch to other instructions, but they can’t extrapolate their own instructions, right?
The main subset of AI is machine learning, which takes a large amount of data, builds many candidate models of behavior, and selects the one that best matches the behavior we want. One way to build a machine learning model is deep learning, in which we train a neural network. Today’s neural networks are loosely modeled on the structure of the human brain, though they’re only around one percent as efficient.
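To make the distinction concrete, here is a minimal, purely illustrative sketch of machine learning: instead of hard-coding the rule y = 2x + 1, a tiny model discovers it from example data. Real deep learning uses vastly larger neural networks, but the principle — adjusting parameters to reduce error on data — is the same.

```python
# Illustrative sketch: a model "learns" the hidden rule y = 2x + 1
# from examples, rather than being programmed with the rule itself.

def train(examples, steps=5000, lr=0.01):
    w, b = 0.0, 0.0  # model parameters, initially arbitrary
    for _ in range(steps):
        for x, y in examples:
            pred = w * x + b     # model's current guess
            error = pred - y     # how wrong the guess is
            w -= lr * error * x  # nudge parameters to shrink the error
            b -= lr * error
    return w, b

# Training data generated by the hidden rule y = 2x + 1
data = [(x, 2 * x + 1) for x in range(-5, 6)]
w, b = train(data)
print(round(w, 2), round(b, 2))  # approaches 2.0 and 1.0
```

The program is never told the rule; it recovers the slope and intercept purely from the examples, which is the essential difference from pre-programmed instructions.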
More on this: Artificial Intelligence vs Machine Learning
With this knowledge, it becomes far easier to see the power and the limitations of modern artificial intelligence.
The current state of AI
There are a few areas where you can interact with AI today. Products like Google Assistant and Google Lens rely on AI to provide answers that would be impractical to hard code. Features like face unlock and image scene detection are similar, and simply would not be possible to implement exclusively with a specific set of instructions.
Most of the more impressive AI applications today run in the cloud. A digital assistant wouldn’t know how to respond to something as simple as a request for the time without an internet connection.
That’s not the case for on-device applications. An internet connection becomes unnecessary when all of the processing happens on the device’s hardware. There’s no need to send data like a voice clip to a giant corporation’s servers for analysis.
Qualcomm would like to move AI-related tasks from the cloud to individual devices
That brings us to Qualcomm’s vision. Qualcomm made it clear it wants to gradually move AI-related tasks from the cloud to individual devices. This would ultimately mean expanding on-device AI (aka edge AI) to include features like voice recognition and enhanced language translation.
There are certainly some benefits of on-device AI. Qualcomm cites increased reliability and greater privacy as the two main reasons for its push of on-device AI.
The argument for increased reliability makes a lot of sense given today’s mediocre infrastructure, especially in the United States. The relatively low population density of the U.S. makes it harder for carriers to make a return on their network investments in rural areas.
As we become more dependent upon cloud AI, internet outages and cell service dead zones will become even more annoying. Therefore, it makes sense to shift core AI functionality from the cloud to your device.
Don’t miss: What is 5G, and what can we expect from it?
On the other hand, Qualcomm has promised 5G will increase reliability and decrease latency in the near future. If 5G lives up to the hype, cloud AI reliability could end up being nearly as good as on-device AI. This would take many years to achieve, but it certainly calls Qualcomm’s apparent need for on-device reliability into question. Additionally, most applications won’t need these higher levels of reliability for years.
The need for greater privacy is a stronger argument. As mass data collection and government surveillance become even more alarming, people will likely become more reluctant to hand over sensitive information.
Consumers will undoubtedly appreciate the option to use AI features without needing to sacrifice so much of their privacy.
If you knew your personal data would stay on your device, you would probably be willing to divulge more information. On-device AI could then use this information to create more personalized experiences.
One of my favorite parts of Qualcomm’s presentation was an example of a personalized on-device AI. Qualcomm Senior Director of Engineering Jilei Hou talked about expanding the idea of capturing photos to capturing memories.
A simple photo of a sunset could be attached to data collected when it was taken. Hou’s example has you take the photo while walking on a beach in La Jolla with your son after a party. Not only could these details be used by on-device AI to more easily search through images, but they could also be used as a way of augmenting someone’s memory. It’s possible your phone may even store the details of conversations.
This level of data collection quite frankly feels uncomfortable today
This level of data collection feels uncomfortable today, but if it’s all stored on your device instead of the cloud it gets a little easier to stomach.
Other potential benefits
Qualcomm did not spell out every potential benefit of on-device AI, but some of the others could accelerate the shift from cloud AI to on-device AI.
Most of the cloud AI features we enjoy today are free, despite the significant cost to offer cloud services at scale. Companies like Google are comfortable with losing some money in this area if it means collecting user data to improve or later monetize their services.
Data sampling will replace mass data collection
Eventually the need for a greater quantity of data will be replaced with a need only for unusual or edge case data. Google won’t need to record voices speaking in a popular dialect to improve speech recognition — it’ll only want uncommon dialects. Data sampling will replace mass data collection once the prioritized popular use cases are covered.
You will be responsible for the costs associated with on-device AI, but these costs won’t be much different than they are now. You will have to buy your own device, pay to service it as needed, and pay for the electricity it consumes.
On-device AI cannot replace cloud AI. Think of it as a platform for core functions. The cloud will still handle unusual requests, and the data gathered from those requests can then be used to improve the core functions. Software updates will do for on-device AI what classes do for students.
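The hybrid pattern described above can be sketched in a few lines. This is a hedged illustration, not a real API: all function names and the intent list are hypothetical. The device answers common requests locally, falls back to the cloud for unusual ones, and logs those so future updates can expand what runs on-device.

```python
# Hypothetical sketch of the hybrid on-device/cloud pattern.

unusual_log = []  # unusual requests, collected to improve the local model

def local_answer(request):
    return f"on-device answer to '{request}'"

def cloud_answer(request):
    return f"cloud answer to '{request}'"

ON_DEVICE = {"what time is it", "set a timer"}  # core functions handled locally

def handle(request):
    if request in ON_DEVICE:
        return local_answer(request)  # works offline, no data leaves the phone
    unusual_log.append(request)       # edge case: log it for future training
    return cloud_answer(request)

print(handle("what time is it"))
print(handle("translate this menu"))
print(len(unusual_log))  # 1
```

The design choice mirrors the article’s point: privacy-sensitive core requests never leave the device, while only the rarer edge cases feed the cloud’s training data.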
The challenges of on-device AI
Qualcomm’s vision of on-device AI is admirable, but it faces some serious challenges ahead.
The main purpose of cloud AI is to hand off computationally intensive tasks to hardware better equipped to handle them. You wouldn’t ask a toddler to prepare your work presentation, just like you wouldn’t ask an on-device AI to identify the objects in an image. You would instead rely on the cloud’s far greater resources.
Just as a toddler needs time to mature, so do these processors
The point is not that on-device AI won’t improve. Companies like Qualcomm will surely transform the AI-related abilities of their processors. However, just as a toddler needs time to mature, so do these processors.
More powerful hardware will need to combine with more efficient software algorithms. Qualcomm will need major breakthroughs in both of these areas to realize its vision.
There is another issue that even Qualcomm has acknowledged. Assuming it manages to pull off some amazing engineering, it still has to work within tight power constraints. Growing a neural network comes with a steep increase in its power demands.
Solving the issues we face today will require a greater understanding of artificial intelligence. Even some of the best neural networks have unresolved quirks.
Related: What being an “AI first” company means for Google
Tackling these challenges will require more research. Qualcomm has already announced relevant efforts like its recently revealed AI Research Organization. Hopefully these efforts will pay off soon.
We’re still in the early stages of seeing AI tasks migrate from the cloud to individual devices. Qualcomm’s vision may be idealistic at this stage, but it’s certainly making some progress.
The Snapdragon 700 series and Snapdragon 710, in particular, were developed with that vision in mind. Qualcomm’s ability to double AI performance within the same tier in just a year is very promising.
As a consumer, you should expect a trickle of on-device AI features rather than a flood. We’ll be able to enjoy all the benefits of on-device AI Qualcomm is promising us, but it’s going to take patience.