iOS 11 was introduced on June 5, 2017 at WWDC by Apple CEO Tim Cook, who called it one of the best and most advanced operating systems Apple has shipped. There are subtle design changes to interface elements throughout the operating system: Calculator and Phone have a new look, text is bolder, and the Lock screen and Control Center have been entirely redesigned.
Control Center is now customizable, with options to include a wider range of settings, and it is no longer split across multiple screens. 3D Touch integration has expanded, so you can do more without needing to open the Settings app.
Siri has seen some impressive changes and has become more intelligent than ever. It can translate English into other languages such as Spanish, and it is deeply integrated with Apple Music. It speaks with a more natural voice, learns user preferences, and syncs that information across multiple devices.
Notifications are merged with the Lock Screen: swiping down to access notifications now brings up the Lock Screen. There is no separate Notification Center anymore; other things are more or less the same.
Apple Music includes a new feature where users can see what their friends are listening to and share their own music. The Memories feature in Photos is smarter than ever, and the Camera app gains new features in Portrait mode and Live Photos. Notes now supports searchable handwriting and document scanning, and Maps gains lane guidance and indoor maps for malls and airports.
AirPlay 2 adds multi-room functionality, and HomeKit now supports speakers. Do Not Disturb can be enabled while you are driving, muting notifications while the vehicle is in motion. Videos and photos also take up less space through the adoption of the new HEIF and HEVC formats.
Apple also introduced the next-generation Core ML and ARKit frameworks to the developer world. Core ML equips developers to add the kind of machine learning features used in Apple's Siri, Camera, and QuickType, while ARKit gives developers the power to take users to entirely new worlds with immersive experiences.
Let’s take a look at these two appealing new frameworks.
1. Machine Learning
With Core ML and the Vision framework, you can easily build computer vision machine learning features into your app: face tracking, face detection, landmark detection, text detection, barcode detection, image registration, and more.
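As a quick illustration, here is a minimal sketch of face detection using the Vision framework on iOS 11 (the function name and the way the image is supplied are assumptions for the example):

```swift
import UIKit
import Vision

// Sketch: detect face rectangles in a UIImage with Vision (iOS 11+).
func detectFaces(in image: UIImage) {
    guard let cgImage = image.cgImage else { return }

    let request = VNDetectFaceRectanglesRequest { request, error in
        guard let faces = request.results as? [VNFaceObservation] else { return }
        for face in faces {
            // boundingBox is in normalized coordinates (0...1, origin bottom-left).
            print("Found face at \(face.boundingBox)")
        }
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```

Swapping `VNDetectFaceRectanglesRequest` for `VNDetectTextRectanglesRequest` or `VNDetectBarcodesRequest` gives you text and barcode detection with the same handler pattern.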
There are various pre-trained models available which are capable of the following:
Detects the scene of an image from 205 categories such as an airport terminal, bedroom, forest, coast, and more.
Detects the dominant objects present in an image from a set of 1000 categories such as trees, animals, food, vehicles, people, and more.
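Running one of these models typically goes through Vision. The sketch below assumes a downloaded image-classification model (here called `MobileNet`; Xcode generates a Swift class of the same name when you add the `.mlmodel` file to your project):

```swift
import UIKit
import CoreML
import Vision

// Sketch: classify the dominant object in an image with a Core ML model.
// "MobileNet" is a placeholder for whichever .mlmodel you bundle.
func classify(image: UIImage) {
    guard let cgImage = image.cgImage,
          let model = try? VNCoreMLModel(for: MobileNet().model) else { return }

    let request = VNCoreMLRequest(model: model) { request, _ in
        guard let results = request.results as? [VNClassificationObservation],
              let top = results.first else { return }
        // identifier is a category label, confidence is 0...1.
        print("\(top.identifier): \(top.confidence)")
    }

    let handler = VNImageRequestHandler(cgImage: cgImage, options: [:])
    try? handler.perform([request])
}
```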
2. Augmented Reality
Visual Inertial Odometry (VIO)
ARKit uses Visual Inertial Odometry (VIO) to accurately track the world around the device. It fuses camera sensor data with CoreMotion data, allowing the device to sense how it moves within a room with a high degree of accuracy.
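Getting world tracking running takes only a few lines; VIO is handled for you once the session starts. A minimal setup (view frame and placement are assumptions) looks like:

```swift
import ARKit

// Sketch: start an ARKit session with world tracking.
// ARWorldTrackingConfiguration drives the VIO pipeline, fusing camera
// frames with CoreMotion data under the hood.
let sceneView = ARSCNView(frame: .zero)
let configuration = ARWorldTrackingConfiguration()
sceneView.session.run(configuration)
```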
Scene Understanding and Lighting Estimation
ARKit can detect horizontal surfaces like tables and floors, and can track and place objects on smaller feature points as well. It also estimates the total amount of light available in the scene and applies the correct amount of lighting to virtual objects.
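The sketch below shows both ideas together: horizontal plane detection via the delegate callback, and reading the light estimate to adjust scene lighting (the class name and the intensity scaling are assumptions for illustration):

```swift
import ARKit
import SceneKit

// Sketch: enable plane detection and light estimation in an ARKit session.
class ARHandler: NSObject, ARSCNViewDelegate {
    let sceneView = ARSCNView(frame: .zero)

    func start() {
        let config = ARWorldTrackingConfiguration()
        config.planeDetection = .horizontal      // detect tables, floors
        config.isLightEstimationEnabled = true   // estimate ambient light
        sceneView.delegate = self
        sceneView.session.run(config)
    }

    // Called when ARKit adds an anchor, e.g. a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let plane = anchor as? ARPlaneAnchor else { return }
        print("Detected plane with extent \(plane.extent)")

        // Match virtual lighting to the real scene's brightness.
        if let estimate = sceneView.session.currentFrame?.lightEstimate {
            sceneView.scene.lightingEnvironment.intensity = estimate.ambientIntensity / 1000
        }
    }
}
```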
High Performance Hardware and Rendering Optimizations
With this high-performance hardware you can build detailed and compelling virtual content on top of real-world scenes. You can take advantage of Metal, SceneKit, and third-party tools like Unity and Unreal Engine.
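With SceneKit, placing virtual content over the camera feed is straightforward. This sketch drops a small red box half a meter in front of the camera's starting position (the box size and position are arbitrary example values):

```swift
import UIKit
import ARKit
import SceneKit

// Sketch: render a virtual SceneKit object on top of the real-world scene.
let sceneView = ARSCNView(frame: .zero)
sceneView.session.run(ARWorldTrackingConfiguration())

let box = SCNBox(width: 0.1, height: 0.1, length: 0.1, chamferRadius: 0)
box.firstMaterial?.diffuse.contents = UIColor.red
let node = SCNNode(geometry: box)
node.position = SCNVector3(0, 0, -0.5)  // 0.5 m in front of the session origin
sceneView.scene.rootNode.addChildNode(node)
```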