

Not for me. I vaguely remember that being a thing, but it’s not with the other features at least.
Thanks bud
Four times, once every few years, is still way lower than using it once every few weeks, since it works the first time.
Hmmm I couldn’t really find anything. The only way to guarantee that is to have models that run purely locally, but until very recently that wasn’t feasible.
Smaller AI models that could run on a phone are now doable, but making them useful requires a lot of dev time and only giant data-guzzling companies have tried so far.
Don’t you understand?! 1st amendment only applies to things that align with the Trump party agenda.
That’s what makes America the most free country of all.
2G is also gone. Edit: it’s not gone just yet. Not sure why the phone didn’t try to fall back to 2G.
https://www.ofcom.org.uk/phones-and-broadband/coverage-and-speeds/3g-switch-off/
The old phone came out a couple of years after 4G launched, but before we started sending voice over it.
I assume it just wasn’t in the OS-level code. It only went up to Android 11. We could have tried LineageOS but that would have required a bunch of work including wiping the phone.
Either way, we checked and the option just wasn’t there.
We switched off 3G this year in the UK and my brother’s phone stopped being able to make calls. He was using a 6-year-old high-end Android phone, but it was from just before the cutoff where you could turn on VoLTE (calls over 4G).
Thankfully, I had a spare phone from the following year to hand him, and that one could be made to work with some hidden-menu hacking (the kind of menu you open by typing a code into the dialer).
For people who have not read the article:
Forbes states that there is no indication that this app can or will “phone home”.
Its stated use is for other apps to scan an image they already have access to and find out what kind of thing it is (known as "classification"). For example, to find out whether the picture you’ve been sent is a dick pic so the app can blur it.
My understanding is that, if this is implemented correctly (a big ‘if’), it can be completely safe.
Apps requesting classification could be limited to classifying only files they already have access to. Remember that Android nowadays has a concept of “scoped storage” that lets you restrict folder access. If that’s the case, it’s no less safe than not having SafetyCore at all. It just saves space, as companies like Signal, WhatsApp etc. no longer need to train and ship their own machine-learning models inside their apps; it becomes a common library / API any app can use.
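To give a sense of what on-device classification looks like when every app has to bundle it itself, here’s a minimal sketch using Google’s ML Kit image labelling as a stand-in (this is not SafetyCore’s API, which I haven’t seen publicly documented; SafetyCore would just turn this sort of thing into a shared system service instead of per-app code):

```kotlin
import android.graphics.Bitmap
import com.google.mlkit.vision.common.InputImage
import com.google.mlkit.vision.label.ImageLabeling
import com.google.mlkit.vision.label.defaults.ImageLabelerOptions

// Classify a bitmap the app already has access to, entirely on-device.
// The result is a map of label -> confidence, e.g. "Cat" -> 0.97.
fun classifyImage(bitmap: Bitmap, onResult: (Map<String, Float>) -> Unit) {
    val image = InputImage.fromBitmap(bitmap, /* rotationDegrees = */ 0)
    val labeler = ImageLabeling.getClient(ImageLabelerOptions.DEFAULT_OPTIONS)

    labeler.process(image)
        .addOnSuccessListener { labels ->
            onResult(labels.associate { it.text to it.confidence })
        }
        .addOnFailureListener {
            // Classification failed; either way, nothing left the device.
            onResult(emptyMap())
        }
}
```

The point being: the model runs against data already on the phone, so classification by itself doesn’t require anything to be sent anywhere.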
It could, of course, if implemented incorrectly, allow apps to snoop without asking for file access. I don’t know enough to say.
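For reference, “asking for file access” under scoped storage usually looks something like the sketch below: the user explicitly picks a file and the app only ever receives a URI for that one file. This is a rough illustration assuming the androidx Activity Result API; the activity name is made up.

```kotlin
import android.net.Uri
import android.os.Bundle
import androidx.activity.result.contract.ActivityResultContracts
import androidx.appcompat.app.AppCompatActivity

class PickImageActivity : AppCompatActivity() {

    // The Storage Access Framework hands back a URI for the single file the
    // user picked; the app never gets to browse the rest of storage.
    private val pickImage =
        registerForActivityResult(ActivityResultContracts.OpenDocument()) { uri: Uri? ->
            uri?.let {
                contentResolver.openInputStream(it)?.use { stream ->
                    // Pass the bytes to whatever classifier the app uses.
                }
            }
        }

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        pickImage.launch(arrayOf("image/*")) // only show images in the picker
    }
}
```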
Besides, do you think Google isn’t already scanning for things like CSAM? It was confirmed to happen on platforms like Google Photos well before SafetyCore was introduced, though I’ve not seen anything about it being done on-device yet (correct me if I’m wrong).
Yeah this is emblematic of modern LLMs like Gemini and ChatGPT. Needlessly verbose, confidently wrong, less capable of actually doing things.