Snaps can only access what you grant them, and they can only compromise the parts of the system you granted access to via an interface.
Yes, and that is what made them appealing to me in the beginning, but for some kinds of software it’s just not realistic.
For example, JetBrains’ snaps all require full access to the machine.
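To illustrate: the JetBrains IDE snaps are published with classic confinement, which means no sandboxing at all, and installing one requires explicitly opting out of confinement. A sketch (the exact snap name here is one example; others follow the same pattern):

```shell
# Classic snaps run unconfined; snap refuses to install them
# without an explicit acknowledgement via the --classic flag.
snap install intellij-idea-community --classic

# Inspecting the snap shows its confinement mode,
# e.g. a line reading "confinement: classic".
snap info intellij-idea-community | grep confinement
```

So for this class of software, the "you only grant what you choose" model does not apply at all.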
In general, a lot of software is hard to run in a completely confined mode. For example, it’s common to want to give the software access to the user’s home directory and everything under it.
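Snap does model this with interfaces such as `home`; a hedged sketch with a hypothetical snap name (`some-editor`) showing how such access is typically granted and inspected:

```shell
# Grant a (hypothetical) strictly confined snap access to
# non-hidden files under the user's home directory.
snap connect some-editor:home

# List which interfaces the snap is currently connected to.
snap connections some-editor
```

But note what this means in practice: once `home` is connected, everything under the home directory is in scope for the snap, which is exactly the kind of broad grant the rest of this argument is about.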
And even if it were possible and practical to always sandbox all third-party software sufficiently, that would still not entirely solve the problem, because the information that is accessible to the software, limited as it is, may itself be sensitive. So if tomorrow a JetBrains IDE is compromised, even if I can completely revert to an uncompromised version, and even if the IDE did not have general access to the machine, my source code would already have been stolen. Worse, some people manage passwords and SSH keys in the IDE, and those would likely have been compromised as well.
You seem to make a lot of technical assumptions without having looked any deeper into the implementation (i.e. the build process of snaps, the way the GPG signature prevents manipulation of the package itself, the confinement and interface system at runtime, the various test and rollback features snaps bring by design, etc.).
I may not be an expert on the specifics of snap’s build system and runtime protections, but I do have a background in these general topics, as well as in security.
I gave an example of a scenario that could lead to compromised security despite all the best practices and precautions taken by the Snap team, because the vulnerability is not in Snap itself; Snap only facilitates it by invoking third-party code automatically.
Your approach is also somewhat contradictory.
On one hand, you don’t want to let users disable automatic updates, because you don’t trust them to decide what should be updated and when.
On the other hand, you trust those same users to correctly limit the software’s access to their machine (when that is even possible; some software doesn’t allow it at all, as I mentioned above).
Ultimately I don’t think you, or anyone else, can really argue with the technical merit of my claims. It is not impossible for a vendor to ship a compromised or malicious update, unintentionally or even intentionally (e.g. a rogue employee). So with all the good intentions, precautions, and mechanisms in the world, it boils down to trust. And in its current form, the Snap system forces me to trust my vendors to a degree that I am simply not comfortable with, and reality shows I’m not wrong not to trust all vendors blindly.