SecureDrop 1.7.0 upgrade fail


This thread is private but it will eventually be public. Last night, all SecureDrop instances that automatically upgraded to version 1.7.0 began displaying the following error message when trying to reach the URL used to upload documents (this example is from NRK).


The notifications sent by OSSEC reveal the problem:

OSSEC HIDS Notification.
2021 Jan 27 08:01:38

Received From: (app) X.X.X.X->/var/log/syslog
Rule: 1002 fired (level 2) -> "Unknown problem somewhere in the system."
Portion of the log(s):

Jan 27 08:01:38 app python[23556]: AttributeError: module 'config' has no attribute 'SESSION_EXPIRATION_MINUTES'
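For context, a defensive lookup like the following (a sketch, not SecureDrop's actual code) would have tolerated the missing setting by falling back to a default:

```python
import types

# Stand-in for the deployed config module, which lacks the new setting
config = types.ModuleType("config")

# Accessing config.SESSION_EXPIRATION_MINUTES directly would raise:
# AttributeError: module 'config' has no attribute 'SESSION_EXPIRATION_MINUTES'

# getattr() with a default tolerates the missing attribute instead
minutes = getattr(config, "SESSION_EXPIRATION_MINUTES", 120)
print(minutes)  # 120
```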

And the manual fix is straightforward:

  • From the Admin usb key
  • Open a terminal
  • ssh app
  • echo 'SESSION_EXPIRATION_MINUTES = 120' | sudo tee -a /var/www/securedrop/
  • sudo apt-get install -f
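To illustrate why the one-line append works, here is a rough simulation (using a temporary file as a hypothetical stand-in for the server's config file, whose exact name is elided above): appending the setting makes the attribute available the next time the module is loaded.

```python
import os
import tempfile
import types

def load_config(path):
    """Load a Python config file as a module object (simplified importer)."""
    mod = types.ModuleType("config")
    with open(path) as f:
        exec(f.read(), mod.__dict__)
    return mod

# Temporary stand-in for the SecureDrop config file (real path elided in the thread)
fd, path = tempfile.mkstemp(suffix=".py")
os.close(fd)

cfg = load_config(path)
print(hasattr(cfg, "SESSION_EXPIRATION_MINUTES"))  # False: this is where 1.7.0 breaks

# Equivalent of: echo 'SESSION_EXPIRATION_MINUTES = 120' | sudo tee -a ...
with open(path, "a") as f:
    f.write("SESSION_EXPIRATION_MINUTES = 120\n")

cfg = load_config(path)
print(cfg.SESSION_EXPIRATION_MINUTES)  # 120
os.unlink(path)
```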

After this fix is applied, the SecureDrop instance is back online (see Al Jazeera or JMM for instance). At the time of this message (9:50am CEST), some SecureDrop installations are not yet upgraded and still run 1.6.0, as displayed on their landing page:


All SecureDrop instances in the FPF directory are monitored and FPF staff have received a message about the error. Every contact person for SecureDrop also received this:

The monitoring system operated by Freedom of the Press Foundation (Icinga) was unable to establish a connection to this SecureDrop instance via its public .onion address.

Icinga makes repeated connection attempts before sending this notice. Your SecureDrop may be offline, or the Tor network may be experiencing congestion.

Icinga will send a notice with the subject line "RECOVERY" when the service is reachable again.

If you would like to request advice, or if you want to stop receiving Icinga alerts, please contact us:

  - via - see for our GPG key

  - via the SecureDrop support portal, if you have an account - see

Although I applied the fix described above wherever I could, I also filed an issue at to share it. I'm sure FPF staff quickly figured it out on their own, but it's polite to do so.

The manual fix is easy, but I'm not sure how it can be fixed with a package upgrade, because the bug caused the post-installation step of the 1.7.0 package to fail. I did not research it, but I don't see how unattended upgrades can recover automatically from a situation where apt-get install -f is required. If that's not possible, every SecureDrop administrator will have to apply the fix manually, meaning some instances may be down for an extended period of time.

Which brings me to my question and food for thought for both of you (and the reason why I did not post this publicly): How could this situation be exploited by an adversary?

Reading /etc/apt/apt.conf.d/50unattended-upgrades, it looks like an automated fix is possible:

// This option allows you to control if on a unclean dpkg exit
// unattended-upgrades will automatically run 
//   dpkg --force-confold --configure -a
// The default is true, to ensure updates keep getting installed
//Unattended-Upgrade::AutoFixInterruptedDpkg "true";

And SecureDrop does not change this setting to false. SecureDrop developers are presumably already working on a fix and all instances will come back online once 1.7.1 is published.

It looks like the issue was introduced by commit 53d6809e989110426ba0d5b764f28cc39de855bb. The related Debian package is securedrop-app-code. /var/www/securedrop/ is installed by an Ansible task (not the Debian package). No security-related change is mentioned in the 1.7.0 changelog.

How could this situation be exploited by an adversary ?

I don’t see how it could be exploited.


The issue was made public on the forum about an hour ago.

Same opinion: I don't see how it could be used by an attacker.

This upgrade failure created a few unusual patterns:

  1. All SecureDrop source interfaces went offline right after the upgrade.
  2. Each SecureDrop instance stayed offline for X seconds before eventually coming back online as 1.7.0 (or 1.7.1).
  3. SecureDrop contact persons received an email from notifying them of the downtime, if they are registered in the directory.

What can be deduced from these patterns?

  1. Someone monitoring them would learn the daily reboot time. But that is already guessable, because the source interface is unresponsive at this precise time every day. So nothing new.
  2. If the SecureDrop comes back online quickly (i.e. X is small) it means it is closely monitored. If the SecureDrop does not come back before 1.7.1 automatically fixes the issue, it could mean the instance is either not actively in use or not monitored. In itself I don’t see how it can be harmful, but it may be a useful insight for an adversary watching all organizations closely.
  3. The contact persons already receive encrypted emails on a regular basis from and I don't see how this batch of notifications would help an adversary gain additional knowledge.
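To make pattern 2 concrete, here is a toy sketch (the timestamps are invented for illustration) of what a passive observer repeatedly probing a source interface could compute:

```python
# Illustrative only: deriving the downtime X from periodic reachability
# probes of an onion address (hypothetical data, not real measurements).
from datetime import datetime

probes = [  # (timestamp, reachable) pairs from periodic checks
    (datetime(2021, 1, 27, 8, 0), True),
    (datetime(2021, 1, 27, 8, 5), False),   # upgrade breaks the instance
    (datetime(2021, 1, 27, 8, 10), False),
    (datetime(2021, 1, 27, 9, 30), True),   # fix applied, back online
]

def downtime_spans(probes):
    """Return spans between each first failed probe and the next success."""
    down_since, spans = None, []
    for ts, up in probes:
        if not up and down_since is None:
            down_since = ts
        elif up and down_since is not None:
            spans.append(ts - down_since)
            down_since = None
    return spans

x = downtime_spans(probes)[0]
# A short X suggests close monitoring; an X lasting until 1.7.1 lands
# suggests the instance is unattended or not actively in use.
print(x)  # 1:25:00
```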

What mitigations?

  1. If the delay is more than a couple of hours, the organization can either (i) move the SecureDrop servers if their location is known (e.g. a newsroom), or (ii) re-organize how downtime notifications are handled so they are acted upon faster.

I don't see anything else. But I don't really have the right mindset to think about these issues in depth (being a developer is different from being a security officer :wink: ), so I may be missing something significant. Food for thought!
