Developer beta testers are finally able to try out Apple Intelligence on their iPhone and other devices, as part of the first betas of iOS 18.1, iPadOS 18.1, and macOS Sequoia 15.1.
Apple’s implementation is much more privacy-focused than others, so at least that’s not a concern here compared to its Android/Windows counterparts. However, if you really don’t want anything to do with it, then don’t upgrade to the new iOS/macOS; that’s probably your best bet if you want to stay behind.
Having said that, like it or not, this is the direction society is headed, and opting to be left behind is probably going to be detrimental in the long run. Your next device (as long as you stick with anything remotely mainstream) will ship with the newer OS installed and have these features enabled by default. So you may as well start embracing it now in the relatively private Apple ecosystem, instead of throwing it all away for some other competitor’s ecosystem.
Yeah. I work in the ransomware response sector of IT security. Frankly, I neither need nor want this bloat on my devices. Hopefully I can find ways to remove it.
Where you work and what you do hardly matters in this case: unless you choose to send your request to ChatGPT (or whatever future model gets integrated in the same way), everything happens on device or in a temporary private compute instance that’s discarded once your request is done. The on-device piece only takes Neural Engine resources when you invoke and use it, so the only “bloat,” so to speak, is disk space; and it wouldn’t surprise me if the models are only pulled from the cloud to your device when you enable them, just like Siri voices in different languages.
Ya, you have IT social skills I see.