I have never liked Apple and lately even less. F… US monopolies

  • @deranger@sh.itjust.works

    It’s not data harvesting if it works as claimed. The data is sent encrypted and not decrypted by the remote system performing the analysis.

    From the link:

    Put simply: You take a photo; your Mac or iThing locally outlines what it thinks is a landmark or place of interest in the snap; it homomorphically encrypts a representation of that portion of the image in a way that can be analyzed without being decrypted; it sends the encrypted data to a remote server to do that analysis, so that the landmark can be identified from a big database of places; and it receives the suggested location again in encrypted form that it alone can decipher.

    If it all works as claimed, and there are no side-channels or other leaks, Apple can’t see what’s in your photos, neither the image data nor the looked-up label.
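
    A rough sketch of that round trip in Python - every name here is made up and the “encryption” is a meaningless stand-in (real HE is nothing like XOR); it only shows where each step runs and who can read what:

    ```python
    def detect_landmark_region(photo: bytes) -> bytes:
        """On-device: outline the part of the image that looks like a landmark."""
        return photo[:32]                      # placeholder crop

    def he_encrypt(data: bytes) -> bytes:
        """On-device: encrypt a representation of that region (placeholder, NOT real HE)."""
        return bytes(b ^ 0xAA for b in data)

    def server_identify(ciphertext: bytes) -> bytes:
        """Server side: computes on the ciphertext only; never sees the photo or the label."""
        return ciphertext                      # pretend this is the encrypted landmark label

    def he_decrypt(ciphertext: bytes) -> bytes:
        """On-device: only the device can decrypt the suggested label."""
        return bytes(b ^ 0xAA for b in ciphertext)

    photo = b"...raw image bytes..."
    label = he_decrypt(server_identify(he_encrypt(detect_landmark_region(photo))))
    ```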

    • @ExtremeDullard@lemmy.sdf.org

      It’s not data harvesting if it works as claimed. The data is sent encrypted and not decrypted by the remote system performing the analysis.

      What if I don’t want Apple looking at my photos in any way, shape or form?

      I don’t want Apple exfiltrating my photos.
      I don’t want Apple planting their robotic minion on my device to process my photos.
      I don’t want my OS doing stuff I didn’t tell it to do. Apple has no business analyzing any of my data.

      • AwkwardLookMonkeyPuppet

        What if I don’t want Apple looking at my photos in any way, shape or form?

        Then you don’t buy an iPhone. Didn’t they say a year or two ago that they were going to scan every single picture using on-board processing to look for images and videos that could be child porn, and that anything suspicious would be flagged and sent for human review?

        • @Petter1@lemm.ee

          Well, other cloud services were doing server-side CSAM scanning long before Apple, and they do it with less respect for your privacy than Apple does.

          Apple wanted to improve the process the way the EU wants it, so that no illegal data could be uploaded to Apple’s servers and make them liable. That is why they wanted to scan on-device.

          But anyone who has used Spotlight in the last 4 years should have noticed that it can find pictures from words. This is nothing new; Apple Photos has been analysing photos with AI for a very long time.

      • @LandedGentry@lemmy.zip

        Yeah I was gonna say… I’ll defend Apple sometimes but ultimately this should only be opt-in and they are wrong for not doing that. Full stop.

      • @ganymede@lemmy.ml

        TL;DR edit: I’m supporting the above comment - i.e. I do not support Apple’s actions in this case.


        It’s definitely good for people to learn a bit about homomorphic computing, and let’s give some credit to Apple for investing in this area of technology.

        That said:

        1. Encryption in the majority of cases doesn’t actually buy absolute privacy or security; it buys time - see NIST’s criterion of ≥30 years for AES. It will almost certainly be crackable one day, whether by weakening or by other advances… How many people are truly able to give genuine informed consent in that context?

        2. Encrypting something doesn’t always work out as planned; see this example:

        “DON’T WORRY BRO, ITS TOTALLY SAFE, IT’S ENCRYPTED!!”

        Source

        Yes, Apple is surely capable enough to avoid simple, documented mistakes like the one above, but it’s also quite likely some mistake will be made. And note that Apple is also very likely capable of engineering a leak and concealing it, or making it appear accidental (or, even if truly accidental, leveraging it later on).

        Whether they’d take that risk, and whether their (un)official internal policy would support or reject it, is ofc in the realm of speculation.

        That they’d have the technical capability to do so isn’t at all unlikely. The same goes for any capable entity with access to Apple’s infrastructure.

        3. The fact they’ve chosen to act questionably regarding users’ ability to meaningfully consent, or even to consent at all(!), suggests there may be some issues with assuming good faith on their part.
        • @ExtremeDullard@lemmy.sdf.org

          How hard is it to grasp that I don’t want Apple doing anything on my cellphone I didn’t explicitly consent to?

          I don’t care what technology they develop, or whether they’re capable of applying it correctly: the point is, I don’t want it on my phone in the first place, any more than I want them to set up camp in my living room to take notes on what I’m doing in my house.

          My phone, my property, and Apple - or anybody else - is not welcome on my property.

          • @ganymede@lemmy.ml

            Sorry for my poor phrasing - perhaps re-read my post? I’m entirely supporting your argument. Perhaps your main point aligns most with my #3? It could be argued they’ve already begun from a position of probable bad faith by taking this data from users in the first place.

    • Ebby

      Wait, what?

      So you take a pic, it’s analysed, the analysis is encrypted, encrypted data is sent to a server that can deconstruct encrypted data to match known elements in a database, and return a result, encrypted, back to you?

      Doesn’t this sort of bypass the whole point of encryption in the first place?

      Edit: Wow! Thanks everyone for the responses. I’ve found a new rabbit hole to explore!

      • @utopiah@lemmy.ml

        So homomorphic encryption means the server can compute on the data without actually knowing what’s in it. It’s counter-intuitive, but it’s better not to think of it as encrypt/decrypt/re-encrypt, precisely because the data is NOT decrypted on the server. It’s sent there, computed on, and then a result is sent back.
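
        A toy illustration in Python using textbook Paillier (an additively homomorphic scheme, not the BFV scheme Apple uses, and with laughably small, insecure parameters): the “server” multiplies two ciphertexts it cannot read, and the result decrypts to the sum of the plaintexts.

        ```python
        # Textbook Paillier with toy parameters - for illustration only, not secure.
        from math import gcd
        import random

        p, q = 1789, 1867                  # tiny primes (real keys use ~2048-bit moduli)
        n, n2 = p * q, (p * q) ** 2
        g = n + 1                          # standard generator choice
        lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p-1, q-1), kept private
        mu = pow(lam, -1, n)               # kept private

        def encrypt(m: int) -> int:
            r = random.randrange(2, n)
            while gcd(r, n) != 1:
                r = random.randrange(2, n)
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

        def decrypt(c: int) -> int:
            return ((pow(c, lam, n2) - 1) // n * mu) % n

        # "Server": multiplies ciphertexts without ever decrypting them...
        c = (encrypt(12) * encrypt(30)) % n2
        # ...client: the result decrypts to the sum of the two plaintexts.
        print(decrypt(c))   # 42
        ```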

        • @someacnt@sh.itjust.works

          It might still be possible to compare ciphertexts and extract information from them, right? Welp, I’m not sure whether the whole scheme is secure against attacks like that.

          • @utopiah@lemmy.ml

            extract information

            I don’t think so, at least assuming the scheme isn’t actually broken… but then arguably that would also have far-reaching consequences for encryption more broadly, depending on which scheme the implementation relies on.

            The whole point is precisely that one can compute without “leaks”.

            Edit: they are relying on the Brakerski-Fan-Vercauteren (BFV) HE scheme, cf. https://machinelearning.apple.com/research/homomorphic-encryption
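
            On the ciphertext-comparison point specifically: these schemes are randomized, so encrypting the same value twice gives two different ciphertexts, which is what blocks that kind of matching. A quick sketch with the python-paillier (phe) library (a different scheme than Apple’s BFV, and I’m going from memory on the API):

            ```python
            from phe import paillier   # pip install phe

            pub, priv = paillier.generate_paillier_keypair(n_length=1024)
            a, b = pub.encrypt(42), pub.encrypt(42)    # same plaintext, encrypted twice
            print(a.ciphertext() == b.ciphertext())    # False: fresh randomness each time
            print(priv.decrypt(a), priv.decrypt(b))    # 42 42
            ```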

      • @deranger@sh.itjust.works

        I’m not pretending to understand how homomorphic encryption works or how it fits into this system, but here’s something from the article.

        With some server optimization metadata and the help of Apple’s private nearest neighbor search (PNNS), the relevant Apple server shard receives a homomorphically-encrypted embedding from the device, and performs the aforementioned encrypted computations on that data to find a landmark match from a database and return the result to the client device without providing identifying information to Apple nor its OHTTP partner Cloudflare.

        There’s a more technical write-up here. It appears the final match happens on the device, not on the server.

        The client decrypts the reply to its PNNS query, which may contain multiple candidate landmarks. A specialized, lightweight on-device reranking model then predicts the best candidate by using high-level multimodal feature descriptors, including visual similarity scores; locally stored geo-signals; popularity; and index coverage of landmarks (to debias candidate overweighting). When the model has identified the match, the photo’s local metadata is updated with the landmark label, and the user can easily find the photo when searching their device for the landmark’s name.
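
        So on the client it’s roughly something like this once the encrypted reply comes back (all names and weights below are invented, just to make the quoted description concrete):

        ```python
        from dataclasses import dataclass

        @dataclass
        class Candidate:                  # one decrypted candidate from the PNNS reply
            label: str
            visual_similarity: float      # similarity vs. the photo's embedding
            geo_score: float              # agreement with locally stored geo-signals
            popularity: float             # prior for well-known landmarks
            index_coverage: float         # how over-represented the landmark is in the index

        def rerank(candidates: list[Candidate]) -> str:
            """Stand-in for the lightweight on-device reranking model: a weighted score."""
            def score(c: Candidate) -> float:
                return (0.6 * c.visual_similarity + 0.2 * c.geo_score
                        + 0.1 * c.popularity - 0.1 * c.index_coverage)
            return max(candidates, key=score).label

        # candidates = decrypt(pnns_reply)                 # decryption happens on the device
        # photo_metadata["landmark"] = rerank(candidates)  # label stored locally for search
        ```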

        • @rtxn@lemmy.world

          by using high-level multimodal feature descriptors, including visual similarity scores; locally stored geo-signals; popularity; and index coverage of landmarks (to debias candidate overweighting)

          …and other sciencey-sounding technobabble that would make Geordi LaForge blush. Better reverse the polarity before the dilithium crystals fall out of alignment!

            • @rtxn@lemmy.world

              That’s the point. It’s a list of words that may or may not mean something and I can’t make an assessment on whether or not it’s bullshit. It’s coming from Apple, though, and it’s about privacy, which is not good for credibility.

              • @datavoid@lemmy.ml

                I don’t know what a geo-signal is, but everything else listed there makes perfect sense given the context.

        • @31337@sh.itjust.works

          That’s really cool (the auto opt-in part aside). If I understand correctly, that system offers pretty strong theoretical privacy guarantees (assuming their closed-source client software works as they say, sending fake queries and all that for differential privacy). If the backend doesn’t work like they say, they could infer which landmark is in an image by finding the approximate minimum distance to the embeddings in their DB, but with the fake queries they can’t be sure which query is real. Either way they can’t see the actual image, as long as the “128-bit post-quantum” encryption algorithm doesn’t have any vulnerabilities (and the closed-source software works as described).
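
          The fake-query part is easier to picture with a tiny sketch (entirely hypothetical code, not Apple’s client; it just shows why the server can’t tell which lookup you actually cared about):

          ```python
          import random

          def build_batch(real_query: bytes, make_decoy, num_decoys: int = 3):
              """Mix the real (already encrypted) query in with decoy queries."""
              batch = [make_decoy() for _ in range(num_decoys)]
              real_index = random.randrange(len(batch) + 1)
              batch.insert(real_index, real_query)   # position mustn't give the real one away
              return batch, real_index               # real_index never leaves the device

          # The server answers every query in the batch; the client keeps only the
          # reply at real_index and discards the rest.
          ```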

      • @BorgDrone@lemmy.one

        Doesn’t this sort of bypass the whole point of encryption in the first place?

        No, homomorphic encryption allows a 3rd party to perform operations on encrypted data without decrypting it. The resulting answer is in encrypted form and can only be decrypted by whoever has the key.

        Extremely oversimplified example:

        Say you have a service that converts dollar amounts to euros using the latest exchange rate. You send the amount in dollars; it multiplies it by the exchange rate and returns the euro amount.

        Now, let’s assume the clients of this service do not want to disclose the amounts they are converting. What they could do is pick a large random number and multiply the amount by it. The conversion service multiplies that by the exchange rate and returns the ridiculously large result. You then divide that number by the random number you picked, and you have converted dollars to euros without the service ever knowing the actual amount.

        Of course the reality is much more complicated than that, but the idea is the same: you can perform operations on data in its encrypted form without knowing what the data is, nor the decrypted result of the operation.
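
        That toy example in Python (this is blinding rather than true homomorphic encryption, but it shows the shape of the idea - the service computes on a number it can’t interpret):

        ```python
        import random

        EXCHANGE_RATE = 0.92   # dollars -> euros, used by the conversion service

        def service_convert(blinded_dollars: float) -> float:
            # The service multiplies whatever number it receives by the exchange rate;
            # it never learns the real amount.
            return blinded_dollars * EXCHANGE_RATE

        # --- client side ---
        amount_usd = 1250.00
        blind = random.randrange(10**6, 10**9)     # large random blinding factor
        blinded_result = service_convert(amount_usd * blind)
        amount_eur = blinded_result / blind        # unblind locally
        print(round(amount_eur, 2))                # 1150.0 (= 1250 * 0.92)
        ```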