
Apple has long presented itself as a company committed to protecting user privacy. That stance complicates the development of Apple Intelligence, the set of advanced artificial intelligence features intended to improve Siri, summarize emails, or generate images. To train its models effectively, Apple needs data, yet at the same time it must not compromise user privacy.

On its machine learning blog, Apple has now described how it solves this problem. Instead of collecting real user data, it creates so-called synthetic data. In the case of emails, for example, a model generates several versions of a typical email (such as an invitation to a tennis match) in different lengths and styles. These versions are then converted into "embeddings", numerical representations of the message content.
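To make the idea of an embedding concrete, here is a toy sketch. This is not Apple's pipeline: Apple uses learned language-model embeddings, while this illustration stands in a hashed bag-of-words vector so the example is self-contained. All names and the tiny dimension are assumptions for illustration only.

```python
import hashlib

DIM = 16  # assumed tiny embedding dimension, for illustration only


def embed(text: str) -> list[float]:
    """Map text to a DIM-dimensional vector via hashed word counts,
    L2-normalized so vectors can be compared by dot product."""
    vec = [0.0] * DIM
    for word in text.lower().split():
        idx = int(hashlib.md5(word.encode()).hexdigest(), 16) % DIM
        vec[idx] += 1.0
    norm = sum(v * v for v in vec) ** 0.5 or 1.0
    return [v / norm for v in vec]


# Several synthetic variants of the same kind of message, as in the article
variants = [
    "Want to play tennis tomorrow at 10?",
    "Are you free for a tennis match this weekend?",
    "Invitation: doubles tennis, Saturday morning.",
]
embeddings = [embed(v) for v in variants]
```

The key point is that only these numerical vectors, never the message text itself, are what gets shipped to devices.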

These embeddings are sent to a limited number of iPhones whose users have the "Device Analytics" option turned on. Each phone then compares the synthetic embeddings to real emails stored on the device, without sending anything to Apple. The phone determines which synthetic variant is most similar to a real email, and the result is sent back anonymously using differential privacy, a technique that adds random noise to the data and prevents the identification of any specific user.
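The two on-device steps described above can be sketched as follows. This is a simplified illustration under stated assumptions, not Apple's implementation: similarity is modeled as a dot product over normalized vectors, and the "random noise" step is modeled with randomized response, a basic local differential privacy mechanism.

```python
import math
import random


def dot(a: list[float], b: list[float]) -> float:
    return sum(x * y for x, y in zip(a, b))


def closest_variant(local_vec: list[float],
                    synthetic_vecs: list[list[float]]) -> int:
    """Step 1 (on device): index of the synthetic embedding most
    similar to the embedding of a real, locally stored email."""
    return max(range(len(synthetic_vecs)),
               key=lambda i: dot(local_vec, synthetic_vecs[i]))


def randomized_response(true_index: int, n_options: int,
                        epsilon: float) -> int:
    """Step 2 (on device): report the chosen index with noise.
    With probability tied to the privacy budget epsilon the true
    answer is sent; otherwise a uniformly random option is sent,
    so no single report identifies a user."""
    p_truth = math.exp(epsilon) / (math.exp(epsilon) + n_options - 1)
    if random.random() < p_truth:
        return true_index
    return random.randrange(n_options)
```

Only the noisy index ever leaves the device; the real email and its embedding stay local.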

Apple uses the same approach to improve image generation in Genmoji. Here it monitors only the most common patterns of entered prompts, again anonymously and without any link to specific users or Apple IDs. Apple also emphasizes that only users who have voluntarily opted in by enabling the sharing of analytics data take part in these tests; anyone who does not want this can disable the option at any time in Settings. In the upcoming iOS 18.5, iPadOS 18.5, and macOS Sequoia 15.5 updates, Apple plans to extend this approach to other features such as creating memories, generating images, and intelligent writing.
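Because each device reports only a noisy index, the server side can learn only aggregate trends. A minimal sketch of that aggregation, assuming the reports are the randomized indices described in the article (the function names are hypothetical):

```python
from collections import Counter


def most_common_pattern(noisy_reports: list[int],
                        patterns: list[str]) -> str:
    """Aggregate anonymous, noisy per-device reports into a histogram
    and return the most frequent pattern. With enough reports the
    injected noise averages out, but no individual report can be
    traced back to a user."""
    counts = Counter(noisy_reports)
    winner, _ = counts.most_common(1)[0]
    return patterns[winner]


# Hypothetical example: indices reported by many devices
reports = [0, 1, 1, 2, 1, 0, 1]
patterns = ["cowboy emoji", "cat with hat", "dancing robot"]
```

This is why the approach reveals popular Genmoji prompt patterns without exposing what any one person typed.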
