
Apple Intelligence is here!

Apple is finally riding the AI wave with the release of Apple Intelligence in iOS 18.1 Beta, iPadOS 18.1 Beta, and macOS Sequoia 15.1 Beta, albeit with a big asterisk attached. The beta version of Apple Intelligence is limited to a handful of countries and ships with only a few basic features, with more to come in the coming months.

Nevertheless, it is a big day for Apple and the tech industry: this is Apple’s first attempt at mainstream artificial intelligence products, and the company has a reputation for doing things later, but better.

Apple Intelligence offers a suite of features that covers most of the AI tools we have been using for the last couple of years.

On iPhones, iPads, and Macs, users can now use various writing tools to proofread or rewrite text. They can even choose the tone of the rewritten text—friendly, professional, or concise. ApIn, for lack of a better abbreviation, also allows users to select text and summarize it or extract key points from it. Thanks to its advanced ability to understand context, it can even create tables from text.

These features work across all writing applications, third-party apps, and browsed content. Users need only select the text on a webpage or in a document and access the writing tools from the select-copy-paste menu.

ApIn also offers the same set of writing features for emails. With one click in the Mail app, ApIn can summarize an entire email and give you the gist.

Now, powered by Apple Intelligence, Siri has also gotten a revamp. Instead of the big bubble in the lower-middle portion of the screen that used to block the content behind it, the new Siri UI is a glow that wraps around the edge of the entire screen. This thin-edged design keeps the onscreen content unobstructed, and it was also made with the future in mind: in coming versions, Siri will gain more onscreen awareness, allowing it to draw additional context and information from the text or images on the screen.

Other major updates in the new beta versions are the Phone app’s call recording feature and the Notes app’s audio transcription feature. Finally, iPhone users can now record phone calls.

While on a call, a call record option appears in the top left corner. Tapping it announces the recording to the other person and records the conversation, which can later be transcribed into plain text.

These are just some of the features available with the beta versions of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1 in select regions. However, Apple Intelligence promises more—a lot more.

When Apple Intelligence is released to the public in the stable iOS 18.1 and corresponding iPadOS and macOS versions sometime this October, it will have a whole suite of generative AI tools across devices and apps.

Along with the writing tools, the Mail app will have Smart Reply to help draft your responses. The Priority Messages feature will bump time-sensitive emails to the top of the inbox and show a summarized snippet of each, so users can read the gist of an email right from the notification tray. Beyond email, Priority Notifications will also sort app and other notifications on the device, bumping important and time-sensitive ones to the top.

Apple Intelligence, in its full version, will also introduce Image Playground, Image Wand, and Genmoji, which offer various new ways to express yourself visually.

Using Image Playground, ApIn will let users create images from descriptions in a range of styles, even matching them to people from their Photos library, right within a Messages thread, Freeform board, or Keynote slide. Users will be able to create original images, animations, and illustrations in different art styles and share them within apps on the device or with others via social media and messaging apps.

However, visual content generation goes beyond the text-to-image model we are used to. Thanks to Apple’s new Image Wand feature, apps like Notes and Freeform can now understand sketches. Simply sketch anything you want and circle it; Image Wand will analyze the rough sketch and turn it into a polished image, matched to the style of the surrounding content or rendered in any art style you desire.

Image Wand can even fill empty space with contextual visual elements. Simply circle a blank area in Notes or on a Freeform board, and ApIn will generate an image from the text and other elements on the page or screen.

Another new visual tool is Genmoji. Like Animoji, Genmoji is a master of depiction. But while Animoji was limited to depicting and animating your facial expressions, Genmoji can depict even your friends’ and family’s appearances by drawing on the images in the Photos app. That means users can now send their friends customized emojis that look like them or like the users themselves.

Apple Intelligence will also help your device understand its content and files better. With the new ApIn, users can search for a specific photo or video simply by describing it to the built-in advanced search functionality.

Let’s say you need a photo of you and your friends playing football on a beach, but you have thousands of photos from that weekend in Cox’s Bazar with friends. By simply typing “me and my friends playing football on a beach”, you will have your desired image from the Photos app in front of you. It can even pick out a specific moment in a video clip that matches your search request.

Similarly, users will be able to create custom movies from the images and videos in the Photos app just by describing what they want. Let’s say you want a small recap of what you did in Cox’s Bazar that weekend. Simply type a description in the Photos app, and Apple Intelligence will do the rest: it will identify themes, craft a storyline with distinct chapters, and arrange your photos and videos into a recap movie with its own narrative arc.

Also, similar to Pixel phones, Apple devices with Apple Intelligence will be able to erase unwanted objects and remove distractions from the background using the Clean Up tool in the Photos app.

But the biggest feature bump will come to Siri, Apple’s voice assistant. With Apple Intelligence, Siri will evolve from a voice assistant that merely answers questions into an assistant that actually performs tasks, including ones that previously required a human touch.

With its new design, richer language understanding, and ability to comprehend personal context, conversations with Siri will be more natural and productive. Equipped with onscreen awareness and personal context, Siri will be able to schedule messages and take actions across various apps on Apple devices.

Thanks to its better grasp of context and natural language, it will be better equipped to handle verbal fumbles and stumbles in speech. It will also remember what the user does in different apps so it can assist them better.

Let’s say you just added a calendar event for your cousin’s wedding in Sylhet and want to know what the weather will be like there. You can ask Siri, “What will the weather be like?” You won’t need to specify the city, date, or time; Siri will remember them from when you added the event and give you an answer tailored to it.

Its ability to understand onscreen context will help users eliminate simple yet tedious tasks like adding contacts and events, taking notes, and so on. Let’s say you just saw in a Facebook post that your favorite band is touring your city next month. You can simply ask Siri, and it will read what’s on the screen and create an event with the time, date, location, and other relevant details from the Facebook post.

Siri will also remember your requests, useful information, and relevant details from other apps. Let’s say you wrote an email to your friend Jabed a couple of days ago but didn’t send it. You could just tell Siri, “Send the email I drafted to Jabed this Monday,” and Siri will know which email you are talking about, whom to send it to, and which app to use.

To make things even better, Siri and the writing tools will be assisted by an integration with OpenAI’s ChatGPT, which means ChatGPT’s writing capabilities and advanced photo and document understanding will be built right in.

These features will run within a privacy-protected model where most of the processing is done on the device, without the need to collect or send personal data elsewhere. For the parts that need remote computation, Apple will use its groundbreaking Private Cloud Compute, where ApIn will draw on larger server-based models running on Apple silicon to handle complex requests.
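To picture how that split might work, here is a minimal conceptual sketch in Swift. The types and the routing policy below are hypothetical illustrations of the on-device-first idea described above, not Apple’s actual API.

```swift
// Hypothetical sketch only: these types and this routing policy are
// illustrative; they are not Apple's actual Apple Intelligence API.
enum ComputeTarget {
    case onDevice            // default: personal data never leaves the device
    case privateCloudCompute // fallback: larger models on Apple silicon servers
}

struct AIRequest {
    let prompt: String
    let needsLargeModel: Bool // e.g., a long or complex generation task
}

// On-device-first routing: only requests the local model cannot handle
// are sent to Private Cloud Compute.
func route(_ request: AIRequest) -> ComputeTarget {
    request.needsLargeModel ? .privateCloudCompute : .onDevice
}
```

The point of the design, as Apple describes it, is that the default path keeps everything local, and the cloud path is an exception reserved for requests the on-device models cannot serve.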

However, Apple Intelligence and its features will not be available on all devices. Even though iOS 18.1 will eventually roll out to the iPhone 14, 13, and earlier series, the only iPhones that can use Apple Intelligence are the iPhone 15 Pro and later. Moreover, most of these features are still unavailable in the current beta version. So, Apple users must wait, and in many cases upgrade their devices, to take advantage of Apple Intelligence.
