Apple’s upcoming iOS 27 will embed visual‑intelligence features directly in the Camera app as a new “Siri” mode and expand Photos with AI‑driven Extend, Enhance and Reframe tools, all built on the iOS 27 SDK and the latest Foundation Models.
iOS 27 adds AI‑powered modes to Camera and new editing tools to Photos

Apple’s next major OS release, iOS 27, is only weeks away, and the company has started to surface details about two of its most‑used apps. The Camera app will gain a dedicated Siri mode that builds visual intelligence directly into the capture screen, while Photos will ship three AI‑enhanced editing actions: Extend, Enhance and Reframe. Both sets of features rely on the iOS 27 SDK, which ships with Xcode 16, and require an iPhone 12 mini or later.
What the new Camera mode does
In iOS 27 the Camera app will list a new Siri mode alongside Photo, Video, Portrait and Pano. Selecting it activates Apple’s visual intelligence engine without leaving the camera UI. Until now, that engine has been reachable only through a long press of the Camera Control button or via a Control Center shortcut. By moving it into the native app, Apple reduces the number of taps needed to run AI‑based queries.
Core capabilities
| Capability | How it works | Developer impact |
|---|---|---|
| Nutritional label scan | The camera captures the label, runs on‑device OCR, then maps the parsed nutrients to HealthKit entries. | Apps that already write to HealthKit can use the new VNLabelScanner API introduced in the iOS 27 SDK to read the same data without reinventing the pipeline. |
| Business‑card scan | The image is sent to the on‑device model, which extracts name, phone, email and address, then offers to create a new CNContact. | The Contacts framework now includes a ContactFromImage convenience initializer, simplifying integration for third‑party contact managers. |
Both functions run entirely on the device, preserving user privacy while still delivering the speed expected from native camera workflows.
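As a rough illustration, here is how a HealthKit‑enabled app might consume the scanned label data. VNLabelScanner is a rumored iOS 27 API, so its shape is unknown; the scanner call and its result type below are assumptions, while the HealthKit calls are standard API.

```swift
import Vision      // rumored home of VNLabelScanner in the iOS 27 SDK
import HealthKit
import UIKit

/// Sketch only: VNLabelScanner is a rumored API, so this wrapper assumes a
/// plausible async interface rather than a documented one.
struct NutritionLabelImporter {
    let store = HKHealthStore()

    func importLabel(from image: UIImage) async throws {
        guard let cgImage = image.cgImage else { return }

        // Hypothetical call; the real scanner may expose a request/observation
        // pattern like other Vision APIs instead of a single async method.
        let label = try await VNLabelScanner().scanNutritionLabel(in: cgImage)

        // Standard HealthKit (authorization request omitted for brevity):
        // write the parsed calories as a dietary energy sample.
        let type = HKQuantityType(.dietaryEnergyConsumed)
        let quantity = HKQuantity(unit: .kilocalorie(), doubleValue: label.calories)
        let sample = HKQuantitySample(type: type, quantity: quantity,
                                      start: .now, end: .now)
        try await store.save(sample)
    }
}
```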
Photos gets three AI editing actions
Mark Gurman’s report describes three new tools that will appear in the edit drawer of the Photos app. They are built on the Apple Foundation Models stack, specifically a fork of Google Gemini that Apple has integrated into its on‑device inference engine.
Extend
Purpose: Generate content beyond the original frame.
Example: A close‑up of a historic monument can be expanded to show the surrounding plaza, letting users share a wider view without needing a panoramic shot.
Technical note: The new UIImageExtensionGenerator class in the iOS 27 SDK exposes a generateExtendedCanvas(_:size:) method. Developers can call it from SwiftUI or UIKit to add similar functionality to custom photo‑editing apps.
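If the API lands roughly as described, an editing app might call it along these lines. UIImageExtensionGenerator and generateExtendedCanvas(_:size:) are the names quoted in the report; the initializer, async signature and return type here are assumptions.

```swift
import UIKit

/// Sketch only: UIImageExtensionGenerator is a rumored iOS 27 class, so the
/// shapes below are guesses at a plausible interface.
func extendBeyondFrame(_ original: UIImage) async throws -> UIImage {
    let generator = UIImageExtensionGenerator()

    // Ask the model to paint outward to a canvas 1.5x the original size.
    let targetSize = CGSize(width: original.size.width * 1.5,
                            height: original.size.height * 1.5)

    // Hypothetical call matching the method name quoted in the report.
    return try await generator.generateExtendedCanvas(original, size: targetSize)
}
```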
Enhance
Purpose: Automatic color, lighting and noise correction.
Example: A low‑light portrait is brightened, shadows are softened and skin tones are balanced with a single tap.
Technical note: The AIImageEnhancer service is now part of the Vision framework. It works with CIImage pipelines, allowing developers to chain it with existing Core Image filters.
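Because the report says AIImageEnhancer accepts CIImage, it should slot into an existing Core Image chain. The enhancer type and its enhance(_:) method are the rumored names with an assumed signature; the CIFilter usage is standard API.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

/// Sketch only: AIImageEnhancer is a rumored Vision-framework service, so the
/// enhance(_:) call is an assumed shape. The Core Image steps are real API.
func enhanceThenSharpen(_ input: CIImage) async throws -> CIImage {
    // Hypothetical one-shot correction for color, lighting and noise.
    let enhanced = try await AIImageEnhancer().enhance(input)

    // Chain the result into a standard Core Image filter.
    let sharpen = CIFilter.sharpenLuminance()
    sharpen.inputImage = enhanced
    sharpen.sharpness = 0.4
    return sharpen.outputImage ?? enhanced
}
```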
Reframe
Purpose: Adjust perspective after the shot, especially for spatial photos captured with the LiDAR scanner.
Example: A car photographed from the front can be re‑oriented to showcase the side profile, useful for product listings.
Technical note: The SpatialReframeProcessor takes depth data from the ARKit session and produces a new viewpoint without requiring a new capture.
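A spatial‑photo pipeline might feed ARKit depth into the processor along these lines. SpatialReframeProcessor is the rumored name; its initializer, the render method and the yaw parameter are assumptions, while ARFrame’s sceneDepth and capturedImage are existing ARKit API.

```swift
import ARKit
import CoreImage

/// Sketch only: SpatialReframeProcessor is a rumored iOS 27 type; the method
/// and parameter names below are assumed, not documented.
func reframe(_ frame: ARFrame, yawDegrees: Double) throws -> CIImage {
    // Real ARKit API: per-frame depth from the LiDAR scanner.
    guard let depth = frame.sceneDepth else {
        throw NSError(domain: "Reframe", code: 1) // no LiDAR depth available
    }

    let color = CIImage(cvPixelBuffer: frame.capturedImage)

    // Hypothetical call: synthesize a new viewpoint from color + depth,
    // with yaw assumed to be a rotation in degrees.
    return try SpatialReframeProcessor().render(color,
                                                depthMap: depth.depthMap,
                                                yaw: yawDegrees)
}
```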
These tools sit next to the Clean Up feature introduced in iOS 18, which already lets users erase unwanted objects. Together they give users a full suite of AI‑assisted editing without leaving the Photos app.
Migration checklist for developers
If your app relies on camera or photo‑editing APIs, you’ll need to address a few changes before shipping on iOS 27:
- Update to Xcode 16 – the new SDK ships only with this version; older Xcode releases will not compile the new Vision and Contacts extensions.
- Add NSCameraUsageDescription and NSHealthUpdateUsageDescription – the Siri mode may request HealthKit write access when logging nutrition data.
- Test on‑device inference – Apple’s on‑device models run on the Neural Engine; verify performance on the oldest supported devices (iPhone 12 mini) to avoid UI stalls.
- Adopt the new Swift concurrency APIs – many of the AI calls are now async and return Result types, so updating to Swift 5.9 is recommended.
- Consider fallback UI – if a device does not support the required Neural Engine, the system will hide the new modes automatically; provide graceful degradation in your own UI (a sketch follows this list).
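As a rough pattern for the last two items, the sketch below gates a rumored async enhancement call behind an availability check and falls back to the unmodified image when the feature is absent. AIImageEnhancer and enhance(_:) are the report’s rumored names, and everything about their behavior here is assumed.

```swift
import CoreImage

/// Sketch only: wraps a rumored async AI call so the rest of the app has a
/// single entry point that degrades gracefully on unsupported hardware.
func enhanceIfAvailable(_ input: CIImage) async -> CIImage {
    guard #available(iOS 27, *) else {
        return input // older OS: keep the original image and hide AI controls
    }
    do {
        // Hypothetical rumored API; assumed to throw when the device lacks
        // the required Neural Engine support.
        return try await AIImageEnhancer().enhance(input)
    } catch {
        return input // fall back silently rather than stalling the UI
    }
}
```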
Why this matters for cross‑platform teams
For teams that ship both iOS and Android versions of a camera‑heavy app, the iOS 27 changes illustrate a shift toward on‑device AI that mirrors what Google has been doing with Gemini Nano and AICore on recent Android releases. The new VNLabelScanner and AIImageEnhancer APIs give iOS developers a high‑level entry point that reduces the need for third‑party ML libraries.
On Android, developers will likely need to integrate ML Kit or TensorFlow Lite equivalents to match the feature set, which means maintaining two separate code paths. Using a cross‑platform framework like Flutter or React Native will require native bridges for the new iOS‑only APIs, but the effort is offset by the ability to reuse UI logic across platforms.
What to watch for after launch
- Beta feedback – Apple’s public beta program will surface any edge‑case bugs with the OCR and depth‑based reframe pipelines.
- Model updates – Apple has hinted at incremental improvements to the Gemini‑based models via OTA updates, so keep an eye on release notes for performance tweaks.
- Privacy audits – Because the new modes touch HealthKit and Contacts, reviewers will scrutinize the permission flow. Ensure your privacy policy reflects the new data handling.
Stay tuned for the official iOS 27 release notes and the Xcode 16 WWDC session recordings for deeper implementation details.
