
The best AI features Apple announced at WWDC 2023

Apple announced a host of new software features for its popular devices — computers, iPhone, iPad, Apple Watch, Apple TV, AirPods and the new Apple Vision Pro headset — at its Worldwide Developers Conference (WWDC) 2023 on Monday. As expected from pre-event reports and rumors, many of the new features from the technology giant used artificial intelligence (AI), or “machine learning” (ML), as Apple presenters were careful to say.

In keeping with Apple’s previously stated commitments to user privacy and security, these new AI features largely appear to avoid connecting and transferring user data to the cloud, relying instead on on-device processing power — what Apple calls its “neural engine.”

Here’s a look at some of the most exciting features coming to Apple devices powered by AI.

Persona for Vision Pro

The star of Apple’s event, as has often been the case in the company’s history, was the “one more thing” unveiled at the end: Apple Vision Pro. The new augmented reality headset resembles chunky ski goggles that the user wears over their eyes, allowing them to see graphics overlaid on their view of the real world.


Not due until early 2024, and with a startling starting price of $3,499, the new headset, which Apple calls its first “spatial computing” device, contains a long list of impressive features. These include support for many of Apple’s existing mobile apps and the ability to move Mac computer interfaces into floating digital windows in mid-air.

One major innovation Apple showed off on the Vision Pro, known as Persona, depends heavily on ML. The feature uses built-in cameras to scan a user’s face and quickly create a lifelike, interactive digital doppelganger. This way, when a user dons the device and joins a FaceTime call or other video conference, a digital twin appears in place of the headset-clad user, mapping their expressions and gestures in real time.

Apple said Persona is a “digital representation” of the wearer “created using Apple’s most advanced ML techniques.”
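
Apple hasn’t published how Persona’s capture pipeline works under the hood. As a rough, hedged analogue, the existing ARKit face-tracking API on iPhone and iPad already extracts the same kind of per-frame expression data on device; the minimal sketch below uses only that public API and is not the Vision Pro or Persona implementation.

```swift
import ARKit

// Illustrative analogue only: ARKit face tracking on iPhone/iPad,
// not the (unpublished) Vision Pro Persona pipeline.
final class ExpressionTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth-camera device.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    // Called every frame with updated face anchors.
    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        for case let face as ARFaceAnchor in anchors {
            // Blend-shape coefficients (0...1) describe the user's expression
            // and could drive a digital avatar's face in real time.
            let smile = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
            let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
            print("smile: \(smile), jawOpen: \(jawOpen)")
        }
    }
}
```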

A better “ducking” autocorrect

As iPhone users know well, Apple’s current built-in autocorrect features for texting and typing can sometimes be wrong and unhelpful, suggesting words that are not anywhere close to what the user intended (“ducking” instead of…another word that rhymes but begins with “f”). That all changes with iOS 17, however, at least according to Apple.

The company’s latest annual major update to the iPhone’s operating system contains a new autocorrect that uses a “transformer model” — the same category of AI program that includes GPT-4 and Claude — specifically to improve autocorrect’s word prediction capabilities. This model runs on device, preserving the user’s privacy as they compose.

Autocorrect now also offers suggestions for entire sentences and presents its suggestions in-line, similar to the smart compose feature found in Google’s Gmail.
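
Apple hasn’t published the model or any interface to it. As a purely illustrative sketch of how a transformer-style next-word predictor can drive inline word and sentence suggestions, here is a hedged example in which `NextWordModel` and `predictScores` are hypothetical stand-ins for the unpublished on-device model.

```swift
// Purely illustrative: `NextWordModel` and `predictScores` are hypothetical
// stand-ins for Apple's unpublished on-device transformer, not a real API.
protocol NextWordModel {
    // Returns candidate next words with scores, given the text typed so far.
    func predictScores(context: [String]) -> [String: Double]
}

// Greedy decoding: repeatedly take the model's top-scoring word to build an
// inline, sentence-level suggestion, the way smart-compose-style UIs work.
func suggestCompletion(for typed: String,
                       model: NextWordModel,
                       maxWords: Int = 8) -> String {
    var context = typed.split(separator: " ").map(String.init)
    var suggestion: [String] = []
    for _ in 0..<maxWords {
        let scores = model.predictScores(context: context)
        guard let (word, score) = scores.max(by: { $0.value < $1.value }),
              score > 0.3 else { break }   // stop when the model is unsure
        suggestion.append(word)
        context.append(word)
        if word.hasSuffix(".") { break }   // stop at the end of a sentence
    }
    return suggestion.joined(separator: " ")
}
```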

Live Voicemail

One of the seemingly most useful new features Apple showed off is the new “Live Voicemail” for the iPhone’s default Phone app. This feature kicks in when someone calls a recipient with an iPhone, can’t reach them and begins to leave a voicemail. The Phone app then shows a text-based transcript of the in-progress voicemail on the recipient’s screen, word by word, as the caller speaks. Essentially, it turns audio into text live, on the fly. Apple said this feature was powered by its neural engine and “occurs entirely on device… this information is not shared with Apple.”
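
Apple didn’t say exactly how Live Voicemail is built, but the public Speech framework already supports live, on-device transcription with partial results, which is the same basic pattern. A minimal sketch using that existing API (not Apple’s Live Voicemail code):

```swift
import Speech
import AVFoundation

// Sketch of live, on-device transcription with the public Speech framework.
// Not Apple's Live Voicemail implementation; just the same general pattern.
// A real app must first request speech-recognition and microphone permission.
func startLiveTranscription() throws {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US")),
          recognizer.supportsOnDeviceRecognition else { return }

    let request = SFSpeechAudioBufferRecognitionRequest()
    request.requiresOnDeviceRecognition = true   // keep audio on the device
    request.shouldReportPartialResults = true    // word-by-word updates

    let audioEngine = AVAudioEngine()
    let input = audioEngine.inputNode
    input.installTap(onBus: 0, bufferSize: 1024,
                     format: input.outputFormat(forBus: 0)) { buffer, _ in
        request.append(buffer)                   // stream audio into the recognizer
    }
    try audioEngine.start()

    // In production code, retain the returned task so it can be cancelled.
    _ = recognizer.recognitionTask(with: request) { result, _ in
        if let result = result {
            // Update the UI with the rolling transcript as the caller speaks.
            print(result.bestTranscription.formattedString)
        }
    }
}
```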

Improved dictation

Apple’s existing dictation feature allows users to tap the tiny microphone icon on the default iPhone keyboard and begin speaking to turn words into written text, or try to. While the feature has a mixed success rate, Apple says iOS 17 includes a “new speech recognition model,” presumably using on-device ML, that will make dictation even more accurate.
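
Apple didn’t name the new model, but developers can already ask the Speech framework whether fully on-device recognition is available for a given language, which varies by locale and hardware. A short sketch of that existing check:

```swift
import Speech

// Check whether speech recognition can run fully on device for a locale.
func onDeviceDictationAvailable(for identifier: String) -> Bool {
    guard let recognizer = SFSpeechRecognizer(locale: Locale(identifier: identifier)) else {
        return false // locale not supported at all
    }
    return recognizer.supportsOnDeviceRecognition
}

// Example: print(onDeviceDictationAvailable(for: "en-US"))
```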

FaceTime presenter mode

Apple didn’t announce a new physical Apple TV box, but it did unveil a major new feature: FaceTime for Apple TV, which takes advantage of a user’s nearby iPhone or iPad (presuming they have one) and uses that as its incoming video camera while displaying other FaceTime call participants on a user’s TV.

Another new aspect of the FaceTime experience is a presentation mode. This allows users to present an app or their computer screen to others in a FaceTime call, while also displaying a live view of their own face or head and shoulders in front of it. One view shrinks the presenter’s face to a small circle that they can reposition around the presentation material, while the other places the presenter’s head and shoulders in front of their content, allowing them to gesture to it like they are a TV meteorologist pointing at a digital weather map.

Apple said the new presentation mode is powered by its neural engine.
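
Apple didn’t say which models the presenter overlay uses, but lifting a presenter’s head and shoulders off their background is what the Vision framework’s person-segmentation request does on device today. A rough sketch of that public API for a single video frame (not the FaceTime implementation):

```swift
import Vision
import CoreVideo

// Sketch: generate a person-segmentation mask for one video frame using the
// public Vision API. A presenter overlay could composite the masked person
// over shared content; Apple hasn't published how FaceTime actually does it.
func personMask(for frame: CVPixelBuffer) throws -> CVPixelBuffer? {
    let request = VNGeneratePersonSegmentationRequest()
    request.qualityLevel = .balanced                     // favor real-time speed
    request.outputPixelFormat = kCVPixelFormatType_OneComponent8

    let handler = VNImageRequestHandler(cvPixelBuffer: frame, options: [:])
    try handler.perform([request])

    // The mask is a grayscale buffer: person pixels near 255, background near 0.
    return request.results?.first?.pixelBuffer
}
```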

Journal for iPhone

Do you keep a journal? If not, or even if you already do, Apple thinks it has found a better way to help you “reflect and practice gratitude,” powered by “on-device ML.” The new Apple Journal app on iOS 17 automatically pulls in recent photos, workouts and other activities from a user’s phone and presents them as an unfinished digital journal entry, allowing users to edit the content and add text and new multimedia as they see fit.

Important for app developers, Apple is also releasing a new API, Journaling Suggestions, which lets developers surface their apps’ content as possible Journal entries for users. This could be especially valuable for fitness, travel and dining apps, but it remains to be seen which companies implement it and how elegantly they are able to do so.
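
Apple had not published the Journaling Suggestions API details at announcement time, so the sketch below is entirely hypothetical: the types and method names are invented to illustrate the general idea of an app donating a piece of content as a journal suggestion, not the real framework.

```swift
import Foundation

// Hypothetical sketch only: these names are invented for illustration and do
// not correspond to Apple's (then-unpublished) Journaling Suggestions API.
struct WorkoutSummary {
    let title: String       // e.g. "Morning trail run"
    let date: Date
    let distanceKm: Double
}

protocol JournalSuggestionDonor {
    // An app would call something like this so its content can appear
    // among the user's suggested Journal entries.
    func donateSuggestion(title: String, date: Date, details: String)
}

func shareRun(_ run: WorkoutSummary, via donor: JournalSuggestionDonor) {
    donor.donateSuggestion(
        title: run.title,
        date: run.date,
        details: String(format: "%.1f km run", run.distanceKm)
    )
}
```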

Personalized Volume

Apple touted Personalized Volume, a feature for AirPods that “uses ML to understand environmental conditions and listening preferences over time” and automatically adjusts volume to what it thinks users want.
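
Apple has not described how Personalized Volume actually works. Purely as an illustration of the general idea of “learning a listening preference over time,” here is a generic sketch that keeps an exponential moving average of the user’s manual volume choices per ambient-noise level; the thresholds and update rule are made up for the example.

```swift
// Purely illustrative; not Apple's algorithm. Learns a preferred volume
// (0...1) per coarse ambient-noise bucket from the user's manual adjustments.
struct VolumePreferenceModel {
    private var preferred: [Int: Double] = [:]
    private let learningRate = 0.1

    mutating func recordManualAdjustment(volume: Double, ambientNoiseDb: Double) {
        let bucket = noiseBucket(ambientNoiseDb)
        let old = preferred[bucket] ?? volume
        preferred[bucket] = old + learningRate * (volume - old) // EMA update
    }

    func suggestedVolume(ambientNoiseDb: Double) -> Double? {
        preferred[noiseBucket(ambientNoiseDb)]
    }

    private func noiseBucket(_ db: Double) -> Int {
        switch db {
        case ..<45.0: return 0   // quiet
        case ..<70.0: return 1   // moderate
        default:      return 2   // loud
        }
    }
}
```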

Photos can now identify your cats and dogs

Apple’s previous on-device ML systems for iPhone and iPad allowed its default photo organization app Photos to identify different people based on their appearance. For example, want to see a photo of yourself, your child, your spouse? Pull up the iPhone Photos app, navigate to the “people and places” section, and you’ll see mini albums for each of them.

However useful and pleasant this feature was, it clearly left someone out: Our furry companions. Well, no more. At WWDC 2023, Apple announced that, thanks to an improved ML program, the photo recognition feature now works on cats and dogs, too.
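
Apple didn’t say which model Photos now uses, but the Vision framework has offered a public request that detects cats and dogs in images for several releases. A minimal sketch of that existing API (not the Photos app’s implementation):

```swift
import Vision

// Minimal sketch using the public Vision API, which can already detect cats
// and dogs in an image. This is not how the Photos app is implemented.
func findPets(in imageURL: URL) throws {
    let request = VNRecognizeAnimalsRequest { request, _ in
        guard let observations = request.results as? [VNRecognizedObjectObservation] else { return }
        for observation in observations {
            for label in observation.labels {
                // identifier is "Cat" or "Dog"; boundingBox locates the animal.
                print(label.identifier, label.confidence, observation.boundingBox)
            }
        }
    }
    let handler = VNImageRequestHandler(url: imageURL, options: [:])
    try handler.perform([request])
}
```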



Author: Carl Franzen
Source: VentureBeat
