Possible lesson for Snap's forced-auto-updates policy following Solarwinds' malicious update security breach

So, this news recently broke: https://www.theguardian.com/technology/2020/dec/16/solarwinds-orion-hack-scrutiny-technology

… hackers snuck malicious code that gave them remote access to customers’ networks into an update of Orion

This is just the latest example of distributing malware to millions by manipulating an update.

Even switching to the stable channel may not help in such a scenario because if the vendor’s system is compromised then the attacker may be able to push their updates onto stable as well.

Bottom line: allow me to set up the system so it will NOT INVOKE NEW CODE ON MY COMPUTER WITHOUT MY EXPLICIT KNOWLEDGE AND CONSENT.

where exactly in the snap build infrastructure would that manipulation happen, can you give an example of a part of the pipeline that would allow this (beyond the owner blindly pulling it into the upstream code)?

Well, the channel would indeed not help you, but

snap revert compromised-snap

is exactly designed for this, it would simply go back to the uncompromised binary … (and only move on with a manual “snap refresh”)
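A sketch of that workflow, with a hypothetical snap name (`compromised-snap`) standing in for whatever package was affected:

```shell
# list all locally kept revisions of the (hypothetical) snap,
# including the previous, uncompromised one
snap list --all compromised-snap

# switch back to the previously installed revision; the bad revision
# stays on disk but is no longer the active one
sudo snap revert compromised-snap

# later, once a fixed release is published, move forward again explicitly
sudo snap refresh compromised-snap
```

These are administration commands run against the local snapd on a live system, so treat them as a sketch of the sequence rather than something to paste blindly.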

where exactly in the snap build infrastructure would that manipulation happen

I don’t have any experience as a snap vendor, and I don’t know what the procedure is, but if a vendor’s system is infiltrated then presumably anything that the vendor could do - the attacker could do as well. I suppose it could be done by manipulating the build pipeline at the vendor’s side, or possibly by pushing it upstream on the vendor’s behalf.

Sorry, this remark doesn’t make any sense to me.
Once malicious code is deployed - the machine is compromised indefinitely. The only way to revert it back to safety is to re-format and re-install everything from trusted sources. snap revert may be good for reverting broken updates, but it’s very bad advice for handling a MALICIOUS update.

how exactly ?

snaps can only access what you grant them and only compromise the bits of the system you granted access to via an interface … the software being compromised in itself cannot compromise the host by design; it is always a matter of permissions you specifically grant to it …

you seem to make a lot of technical assumptions without having looked any deeper into the implementation (i.e. the build process of snaps, the way the gpg signature prevents manipulation of the package itself, the confinement and interface system at runtime, the various test and rollback features snaps bring by design, etc.)

snaps can only access what you grant them and only compromise the bits of the system you granted access to via an interface

Yes, and that is what made them appealing to me in the beginning, but for some kinds of software it’s just not realistic.

For example, JetBrains’ snaps all require full access to the machine.

In general, with a lot of software it’s hard to work in completely boxed mode. For example, it’s common to want to give the software access to the user directory and what’s under it.

And, even if it were possible and practical to always sufficiently sandbox all third-party software - it would still not entirely solve the problem, because even the limited information that is accessible to the software may be sensitive. So if tomorrow a JetBrains IDE is compromised, even if I am able to completely revert to an uncompromised version, and even if the IDE did not have general computer access - my source code would have already been stolen. What’s worse, some people manage passwords and SSH keys in the IDE, and those would likely have been compromised as well.

you seem to make a lot of technical assumptions without having looked any deeper into the implementation (i.e. the build process of snaps, the way the gpg signature prevents manipulation of the package itself, the confinement and interface system at runtime, the various test and rollback features snaps bring by design, etc.)

I may not be an expert on the specifics of snap’s build system and runtime protections, but I do have background and understanding in these general topics, as well as in security.

I gave an example of a scenario that could lead to compromised security despite all the best practices and precautions that were taken by the Snap team - because the vulnerability is not in Snap itself. Snap only facilitates it by invoking third-party code automatically.

Your approach is also somewhat contradictory.
On one hand, you don’t want to let users disable automatic updates because you don’t trust them enough to decide what should be updated and when.
On the other hand, you trust the users to limit software’s access to their machine (when possible; some software doesn’t even allow it, as I mentioned above).

Ultimately I don’t think you, or anyone else, can really argue with the technical merit of my claims. It is not impossible for a vendor to issue a compromised or malicious update, unintentionally or even intentionally (e.g. a rogue employee). So with all good intentions, precautions, and mechanisms, it boils down to trust. And in its current form, the Snap system forces me to trust my vendors to a degree that I am simply not comfortable with, and reality shows I’m not wrong to not blindly trust all vendors.

$ snap install kotlin
error: This revision of snap "kotlin" was published using classic confinement and thus may perform
       arbitrary system changes outside of the security sandbox that snaps are usually confined to,
       which may put your system at risk.

       If you understand and want to proceed repeat the command including --classic.

classic snaps (like the jetbrains one above) require consent from the user for exactly this reason, they also require a security review and vetting of the publisher prior to uploading. i admit, they are not particularly more secure than a deb or a tarball extracted to /opt (apart from the fact that you still have all snap rollback/update/snapshot/you-name-it features available, minus the sandboxing) … note that classic snaps are rare and have very high hurdles to overcome for a publisher to be accepted though.

the point is that you can always disconnect the home interface and simply work via ~/snap/<snapname>/current to avoid any access to user data … beyond this the home interface only grants access to visible files (“dot directories” are not accessible through it, so nothing will have access to ssh keys or passwords stored in any hidden configuration dirs)
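concretely, cutting off home access is interface configuration on the host, something like this (the snap name is hypothetical):

```shell
# show which interfaces the (hypothetical) snap is currently plugged into
snap connections some-snap

# disconnect its home plug so it loses access to visible files in $HOME
sudo snap disconnect some-snap:home

# the snap can still read and write freely under its own writable area
ls ~/snap/some-snap/current
```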

well, it is not the job of the snapd team; while they can make the defaults as secure as possible, it is still your responsibility as an admin to actually deny or grant access via the interface management to specific bits of the host.

we want users to eventually get this security update, yes.

you can always delay updates to a convenient time within the next 60 days, it is not like this has not been configurable from day one …
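for reference, the delay knobs are plain snapd configuration; the window and the timestamp below are illustrative:

```shell
# only refresh inside a chosen window, e.g. friday nights
sudo snap set system refresh.timer=fri,23:00-01:00

# or hold all refreshes until a given date (RFC 3339, at most ~60 days out)
sudo snap set system refresh.hold=2021-02-14T00:00:00Z

# check when the next refresh is currently scheduled
snap refresh --time
```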

Vetting a vendor is good, but not sufficient. SolarWinds would have probably passed your vetting, as it passed vetting by half the US government and the Fortune 500 companies who have been using it.

Limiting work to ~/snap/<snapname>/current is good and I consider it a best practice, but it’s not practical for every type of software; I had difficulty using a couple of software packages effectively this way, and eventually just gave up.

it is still your responsibility as an admin to actually deny or grant access via the interface management to specific bits of the host

I agree.
I also consider it my responsibility (and my right) to actually deny or grant running new code on my machine, but you refuse to allow this.

you can always delay updates to a convenient time within the next 60 days

It took nine months to discover that the SolarWinds update from March was compromised. Your decision to limit postponing of updates to 60 days (i.e., not indefinitely) is: [A] arbitrary, [B] insufficient (as reality shows), and [C] not your call to make on the (advanced) user’s behalf.
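For scale, a back-of-the-envelope check with GNU date, using approximate dates from the article (a compromised build in late March 2020, public disclosure on 13 December 2020 - the exact March date is my assumption):

```shell
# seconds since the epoch at UTC midnight of each (approximate) date
start=$(date -u -d 2020-03-26 +%s)
end=$(date -u -d 2020-12-13 +%s)

# whole days in between
days=$(( (end - start) / 86400 ))
echo "$days days of exposure"   # 262 - more than four back-to-back 60-day holds
```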

So, if auto-updates could be disabled more easily, how would this hypothetical snap scenario have played out differently? Would you have waited nine months without updating?

What about the majority of cases where updates close security holes, rather than open them? Would you have avoided updating any of your installed snaps for nine months? How many other security holes would remain open on your system then, in the effort to mitigate this rare case?

Whether or not you update, you “could” be vulnerable. But in practice, updating promptly generally keeps you safer. Using this (non-snap!) incident to claim the opposite makes no sense.

To be clear, I’m avoiding taking a position on whether the current auto-update is a good idea, so that I can make the point that regardless of such a position, the argument you’re making here makes no sense.


Would you have waited nine months without updating?

It depends on the software, on my needs, and on what has changed.
When new versions come out I review the materials the vendor publishes (and often the change log) and decide.
If there are security considerations then I’m usually more inclined to update, but even in such a case, when detailed information is available - I check whether the fixes actually apply to my use cases.
As I mentioned in the other thread, I do support forced updates of some specific types of applications (like browsers), but for most other software I find the updates unnecessary.

Let me give an example.
A new version of KeePass may come out with a security fix for an encryption algorithm that I don’t use, and some UI improvements that I don’t care about. Why should I update it and risk regressions, new security bugs that might affect me, and foul play?

Whether or not you update, you “could” be vulnerable. But in practice, updating promptly generally keeps you safer. Using this (non-snap!) incident to claim the opposite makes no sense.

I am not using this incident to claim the opposite. I claim that even if, on average, a user is safer with updates than without - the user should still be allowed to make a choice. I will give an analogy in the spirit of the time.

Suppose that a certain virus has a mortality rate of 10%, and the vaccine for it has a mortality rate of 0.001%. If an adult person, who has the statistics and the mental capacity to understand them, thinks that he can keep his overall survival rate above 99.999% by means other than the vaccine - would you advocate physically grabbing this person and shoving a needle into his arm?

Because this is what’s happening when I’m forced to install updates on my computer, just a bit more painful…