Apple has promised not to use its iCloud child pornography photo-scanning system for surveillance

The company explained how the system will work and how it differs from competitors' approaches, but did not say what would prevent the feature from being abused in the future.

Apple has released a six-page document answering frequently asked questions about its system for checking iCloud photos for child pornography. The company clarified the details after cybersecurity specialists, journalists and users accused it of violating privacy. TJ publishes a brief extract of the document.
What is the difference between checking photos in Messages and iCloud
Checking content in “Messages” and checking photos in iCloud are two different technologies that do not work together. Child protection in iMessage only works on family accounts with children aged 12 or younger. The data is not transmitted to the authorities, and the function must be activated manually.
Recognizing explicit images in “Messages” does not break end-to-end encryption in iMessage: the check takes place directly on the device that receives the messages, and Apple does not receive any data in the process.
Parents will not be notified about every explicit picture that is received or sent: the system first warns the child that if they choose to view the photo without blurring, a notification will be sent to a parent.
For children aged 13-17, the system will blur explicit pictures and warn about the nature of the content, but it will not notify parents.
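As a rough illustration of the notification policy described above, here is a minimal sketch in Python. The names and structure are hypothetical and do not reflect Apple's actual implementation, only the behavior the document summarizes.

```python
from dataclasses import dataclass

@dataclass
class IncomingImage:
    is_explicit: bool  # result of the on-device classifier, never sent to Apple

def handle_image(image: IncomingImage, child_age: int, child_chose_to_view: bool) -> dict:
    """Illustrative policy: blur and warn for all children on a family account;
    notify a parent only for children 12 or younger who chose to view the photo."""
    if not image.is_explicit:
        return {"blur": False, "warn_child": False, "notify_parent": False}
    notify_parent = child_age <= 12 and child_chose_to_view
    return {"blur": True, "warn_child": True, "notify_parent": notify_parent}

# A 15-year-old who views an explicit image is warned, but parents are not notified.
print(handle_image(IncomingImage(is_explicit=True), child_age=15, child_chose_to_view=True))
```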
Will Apple scan all photos on iPhones for child porn and how does it work
Checking for child abuse imagery only works for photos that the user has chosen to upload to iCloud; if the user has disabled uploading, the system does not work at all. The function does not check the private photo library on the device.
Apple will only be able to learn about accounts that store “a collection of already known child pornography images”, and only about the specific images that match them.
The database of child abuse images is not stored on the device in a readable form: Apple saves unreadable “hashes” on the device. These are strings of numbers that cannot be read or converted back into the original images.
Apple chose the hash comparison approach to avoid scanning all images in the cloud, which is what other companies do. Apple believes that cloud scanning poses a threat to privacy, so the company learns nothing about a user's photos until it records a match with the database.
The system does not check any content on the device, except for photos uploaded to iCloud. Apple will not receive any other data from the device.
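To make the hash comparison idea concrete, here is a minimal conceptual sketch in Python. It is only an assumption-laden illustration: it uses an ordinary cryptographic hash and a plain set lookup, whereas Apple's actual system uses the NeuralHash perceptual hash and a cryptographic matching protocol, so the real on-device database cannot be read or matched this simply.

```python
import hashlib

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash: a fixed-length string that cannot be
    reversed back into the original image. (Apple's NeuralHash also matches
    visually similar images, which an ordinary hash like SHA-256 does not.)"""
    return hashlib.sha256(image_bytes).hexdigest()

# Hypothetical on-device database: hashes of already known images only,
# never the images themselves.
known_hashes = {
    image_hash(b"bytes-of-known-image-1"),
    image_hash(b"bytes-of-known-image-2"),
}

def check_before_icloud_upload(photo_bytes: bytes) -> bool:
    """Only photos queued for iCloud upload are checked;
    a non-match reveals nothing about the photo."""
    return image_hash(photo_bytes) in known_hashes

print(check_before_icloud_upload(b"bytes-of-a-private-photo"))  # False: no match, nothing reported
```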
Is it possible to use the system to recognize something other than child pornography
Apple claims that the system only works with a database of images provided by child safety organizations. The technology was not designed for any other purpose.
The document says that the system does not automatically report data to the authorities: Apple will first review the matches produced by its technology itself.
Apple claims that the system was created only to report photos that are already in the database of child pornography known to the authorities. In most countries, including the United States, such content is illegal, and in the US Apple is legally obliged to report known cases to the authorities.
Apple has said it will refuse any requests from governments to adapt the system for other purposes. The technology was created for a single, specific purpose: to track cases of child abuse.
Governments have previously asked the company to sacrifice user privacy, but it rejected all such requests and will continue to do so, Apple said. The database used for comparison is compiled not by Apple itself but by child safety organizations; the company only uses it. The database contains only child pornography images already known to the authorities.
Apple announced the strengthened child protection measures on August 6. The company is introducing functions in three areas: “Messages”, iCloud photos and Siri. Starting in the fall of 2021, the systems will begin to blur explicit pictures for children under 12 in iMessage, check photos uploaded to iCloud for child pornography, and give safety tips to children and parents in Siri.
After the announcement, the company was criticized by cybersecurity experts, journalists and ordinary users. They accused Apple of violating privacy and wondered how the same function could be used by authoritarian governments in the future.
