In the comments on the main article, the author says:

> It's perhaps worth pointing out here that VLU is nothing like Apple's proposals for CSAM recognition. The neural hashes used to look up images in VLU are 'low resolution'. They're good for distinguishing specific images of paintings, breeds of dog and cat, and so on. But – as you can read in my detailed account of the proposed method – they fall far short of what would have been required to detect CSAM. Another big difference is that initial CSAM detection was intended to be performed locally, using a database of 'positive' neural hashes stored locally. Apple recognised that looking them up online wasn't feasible, but that's exactly what VLU does, because it works differently, with a much larger database and forming less robust and more inaccurate matches. So VLU ≠ CSAM detection, not by a long way.

I'm still not super sure it couldn't be misused, though. A tool that is good enough to reliably detect things like dog breeds (as the author also points out) seems like it should, reliably enough, also detect drug paraphernalia, pictures of Tank Man, banned religious symbols, or whatever else people were worried a potentially authoritarian government might use to oppress them. That's also why it seems, to me, like the more worrying vector for an authoritarian government to force abuse wasn't the higher-resolution neural hashes matched against known images, but the ML labeling of images for searchability.

"For the record I don't believe Apple is collecting that info - having said that, I think the biggest issue with Apple is that it is not possible to fully audit and determine what they collect and what they don't."

This begs the question: why is it not possible? I monitor the traffic on the computers I own, which means I sometimes have to decrypt traffic from applications and then re-encrypt it before sending it from the loopback to the local network I own, and then over the wire onto "the internet". I like to know what data applications are sending, or trying to send. That's not unreasonable in the slightest.

Yet in the "tech" company model of computer network use, the computer owner is discouraged, e.g., by scary browser warnings, SSL errors, connection failures, etc., from placing any trust in themselves. Instead it advocates, if not effectively mandates, placing trust (and fees, i.e., for "domain names") in some other entity, e.g., Apple, other "Certificate Authorities", etc. The mere act of questioning this model is often attacked by "tech" workers commenting online. Under this model, it is as if the computer and local network owner does not also own the traffic. Who should be allowed to view it and control it? Even after purchase, Apple believes it is entitled to collect data from someone else's computer, over someone else's network. If anything, one would think the computer and network owner should be allowed to prevent any third party, including Apple, if they so choose, from initiating remote connections and sending data from the computer owner's computer.

And Apple also believes no computer purchaser ever has an interest in seeing what data is being collected, by monitoring the traffic, let alone an interest in preventing these connections. There is no option provided to globally disable all phoning home to Apple, to indicate "No, thank you." I owned older Apple computers that never made such assumptions. Generally, firewalls were not used to block software pre-installed on the computer by Apple. The so-called "tech industry" has moved the needle and tried to normalise what is, IMHO, an entirely different scenario. It may be disingenuous to claim that Apple only seeks to know how purchasers of its computers are using them, for two reasons. The first is that Apple does not ask purchasers for permission to do this surveillance. The second is that if purchasers applied the same standard to Apple, it is unlikely Apple would agree to such surveillance of its own computers.

Imagine hypothetically that purchasers shared data with Apple, but every time Apple used the data, Apple's computers would automatically make outbound connections, over Apple's internal local network, through Apple's firewalls, over the internet, to purchasers' computers, so the purchasers could "understand how Apple was using their data". Imagine further that Apple had no control over these connections, and that Apple was not allowed to "MiTM" the outbound connections made from Apple's computers to purchasers' computers to see what data/information was being sent. What if purchasers argued that, by using TLS certificates they controlled, they were "only trying to protect the security of their ecosystem", e.g., the collection of entities with whom they share data?
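The firewall-based blocking mentioned above can be sketched with a pf packet-filter fragment (pf ships with macOS and the BSDs). This is a minimal illustration under stated assumptions, not a complete ruleset: the table name is made up, and the addresses are placeholders from the TEST-NET-1 documentation range, standing in for whatever endpoints your own traffic monitoring turns up.

```
# /etc/pf.conf fragment: block outbound "phone home" traffic at the
# network owner's firewall. Addresses are placeholders (TEST-NET-1);
# replace them with endpoints observed in your own traffic captures.
table <phonehome> persist { 192.0.2.10, 192.0.2.11 }
block out quick proto { tcp, udp } from any to <phonehome>
```

The ruleset would be loaded with `pfctl -f /etc/pf.conf` and enabled with `pfctl -e`. Note that blocking by IP address is brittle when endpoints sit behind shared CDNs, which is one practical reason people resort to decrypting the traffic itself.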
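The 'low resolution' hashes discussed above can be illustrated, in spirit, with a classic perceptual "average hash". This is a deliberately crude stand-in, not Apple's NeuralHash (which is a neural network); it only shows why a coarse hash can tolerate small edits to an image while flipping many bits for broadly different content.

```python
# Toy "average hash": a deliberately simple perceptual hash, standing in
# for the far more sophisticated neural hashes discussed above.

def average_hash(pixels, hash_size=8):
    """Hash a grayscale image (a list of rows of 0-255 ints).

    Downsamples to hash_size x hash_size by block-averaging, then emits
    one bit per cell: 1 if the cell is brighter than the overall mean.
    Small edits barely move block averages, so the hash is stable;
    broadly different images flip many bits.
    """
    h, w = len(pixels), len(pixels[0])
    cells = []
    for by in range(hash_size):
        for bx in range(hash_size):
            block = [pixels[y][x]
                     for y in range(by * h // hash_size, (by + 1) * h // hash_size)
                     for x in range(bx * w // hash_size, (bx + 1) * w // hash_size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    bits = 0
    for c in cells:
        bits = (bits << 1) | (c > mean)
    return bits  # a hash_size*hash_size-bit integer

def hamming(a, b):
    """Count of differing bits between two hashes (0 = identical)."""
    return bin(a ^ b).count("1")

# A horizontal gradient, the same gradient with a little noise, and its negative.
grad = [[(x * 255) // 31 for x in range(32)] for _ in range(32)]
noisy = [[min(255, v + 5) if (x + y) % 7 == 0 else v
          for x, v in enumerate(row)] for y, row in enumerate(grad)]
negative = [[255 - v for v in row] for row in grad]

near = hamming(average_hash(grad), average_hash(noisy))    # small distance
far = hamming(average_hash(grad), average_hash(negative))  # large distance
```

A near-duplicate stays within a few bits of the original while an unrelated image lands far away; matching against a database is then a nearest-neighbour search under Hamming distance. Real neural hashes learn the downsampling rather than averaging, which makes them robust to crops and re-encoding, and also makes their failure modes harder to audit.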