“AI is evolving fast and will change the way we work even faster,” Garg said. “As a result, we need to start adopting AI a little bit every day.”
Earlier this week, Apple announced plans to integrate generative AI into its devices and apps. Called Apple Intelligence, the plan marks the company’s first major foray into the rapidly evolving field, where it has lagged behind Microsoft and Google. Apple has also partnered with OpenAI to bring the ChatGPT chatbot to Siri and its new suite of writing tools.
The features are designed to streamline the user experience. Apple Intelligence can summarize emails before you open them, sort notifications by importance, transcribe voice recordings, proofread writing, suggest edits and even let users search for photos and videos using natural language.
Apple’s new venture is a step forward in what Garg calls “change management.”
“Once you embrace it and start using it, you’ll be able to learn how to use this technology more effectively in the future and essentially optimize the way you do your job,” said Garg, who served as a member of the U.S. Artificial Intelligence Safety Institute Consortium, which promotes the development and implementation of trustworthy AI. “That’s the direction we’re heading.”
AI is quickly becoming one of the most impactful human innovations of our time. Its application across industries, from healthcare to manufacturing to education, is enabling higher productivity, lower costs and faster innovation.
But only if the technology is used for good, Garg said.
AI is not without its drawbacks. There are concerns that the rapidly developing technology could displace human jobs, spread misinformation through manipulated and fabricated content, and disrupt student learning by making it easy to automate essays and assignments. There are also privacy and security concerns, as AI systems process large amounts of user data.
Last year, hundreds of leaders in the field of artificial intelligence signed an open letter warning that the technology could one day pose a threat to humanity. In a one-sentence statement, the group said mitigating the risks of this technology should be a global priority, alongside pandemics, nuclear war and other societal concerns.
Apple has put privacy and security at the center of its AI rollout: in the press release announcing its latest iOS upgrade, the word “privacy” appeared 18 times. The company said it does not use user data to train its models.
Most Apple Intelligence features run locally, so sensitive data stays on the device. When a task is too complex for the on-device model, the request can be passed to a more capable model running on Apple’s cloud servers, a system the company calls Private Cloud Compute. Once the request is completed, the data is deleted from the cloud.
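Apple has not published a developer-facing interface for this routing logic; the sketch below, written in Swift with entirely hypothetical type and function names, only illustrates the local-first, cloud-fallback pattern the company describes: try the on-device model, escalate to the cloud only when it cannot handle the task.

```swift
// A minimal sketch of the local-first, cloud-fallback routing described above.
// All names here (LanguageModel, OnDeviceModel, CloudModel, handle) are
// hypothetical; Apple has not published this interface.

enum ModelError: Error {
    case tooComplexForDevice
}

protocol LanguageModel {
    func respond(to request: String) throws -> String
}

struct OnDeviceModel: LanguageModel {
    func respond(to request: String) throws -> String {
        // Stand-in capability check: assume short requests fit the local model.
        guard request.count < 200 else { throw ModelError.tooComplexForDevice }
        return "on-device answer for: \(request)"
    }
}

struct CloudModel: LanguageModel {
    func respond(to request: String) throws -> String {
        // Per the article, request data is deleted once the task completes.
        return "cloud answer for: \(request)"
    }
}

// Route a request: keep it on the device when possible, escalate only
// when the local model cannot handle it.
func handle(_ request: String) -> String {
    do {
        return try OnDeviceModel().respond(to: request)
    } catch {
        return (try? CloudModel().respond(to: request)) ?? "unavailable"
    }
}

print(handle("Summarize my unread email"))               // handled locally
print(handle(String(repeating: "context ", count: 50)))  // escalated to cloud
```

The design choice the pattern captures is that escalation is an exception path, not the default: sensitive data leaves the device only when the local model explicitly declines the task.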
“We should think of AI as a partner that helps us do more and be more creative,” Garg said. “If we think of AI as both an assistant and a tutor, we can learn how to use it more effectively.”
Apple Intelligence will be available to users in a testing phase this fall.