Network

Setting HTTP headers for a static site on AWS CloudFront

TL;DR: This is way, way more complicated than it needs to be.

For a very long time, I ran this site off my cloud server in the US. When I moved to London, I started experiencing the painful impact of the ~100ms latency on the loading time for images and videos, and decided to move to a Content Delivery Network (CDN) with global reach. Unfortunately, most CDNs have steep minimum spend requirements that are excessive for a low-traffic site like this one. Amazon’s CloudFront is an exception, and my hosting costs are in the vicinity of $20 per month, which is why I settled on it despite my dislike for Amazon.

Serving a static site is not just about putting content somewhere to be served over HTTPS. You also need to set up HTTP headers:

  • Cache-Control headers to ensure static content isn’t constantly checked for changes.
  • Security headers to enable HSTS and protect your users from abuse like Google FLoC, clickjacking by sites that iframe your content, or XSS injection.

In my original nginx configuration, this is trivial, if a bit verbose: just add:

expires max;
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload";
add_header Content-Security-Policy "default-src 'self' https://*.majid.info/ https://*.majid.org/; object-src 'none'; frame-ancestors 'none'; form-action 'self' https://*.majid.info/; base-uri 'self'";
add_header X-Frame-Options "SAMEORIGIN" always;
add_header X-Xss-Protection "1; mode=block" always;
add_header X-Content-Type-Options "nosniff" always;
add_header Referrer-Policy no-referrer-when-downgrade;
add_header Feature-Policy "accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'none'; battery 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; encrypted-media 'none'; execution-while-not-rendered 'none'; execution-while-out-of-viewport 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; layout-animations 'none'; legacy-image-formats 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; navigation-override 'none'; oversized-images 'none'; payment 'none'; picture-in-picture 'none'; publickey-credentials-get 'none'; sync-xhr 'none'; usb 'none'; vr 'none'; wake-lock 'none'; screen-wake-lock 'none'; web-share 'none'; xr-spatial-tracking 'none'; notifications 'none'; push 'none'; speaker 'none'; vibrate 'none'; payment 'none'";
add_header Permissions-Policy "accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), cross-origin-isolated=(), display-capture=(), document-domain=(), encrypted-media=(), execution-while-not-rendered=(), execution-while-out-of-viewport=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), navigation-override=(), payment=(), picture-in-picture=(), publickey-credentials-get=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=(), clipboard-read=(), clipboard-write=(), gamepad=(), speaker-selection=(), conversion-measurement=(), focus-without-user-activation=(), hid=(), idle-detection=(), serial=(), sync-script=(), trust-token-redemption=(), vertical-scroll=(), notifications=(), push=(), speaker=(), vibrate=(), interest-cohort=()";

Doing this with CloudFront is much more complicated, however. You have to use a stripped-down, specialized version of their AWS Lambda “serverless” Function-as-a-Service framework, Lambda@Edge. It is very poorly documented, so this is my effort at rectifying that. When I first set this up, only Node.js and Python were available, but it seems Go, Java and Ruby have been added since. I will use Python for this discussion. The APIs are quite different for each language, so don’t assume switching languages is painless.

In the interests of conciseness, I am going to skip the parts about creating an S3 bucket and enabling it for CloudFront; there are many tutorials available online. I use rclone to deploy actual changes to S3, and make an AWS API call using awscli to trigger a cache invalidation, but software like Hugo has built-in support for AWS. Here is my deployment target in my Makefile:

deploy:
	git push
	git push github master
	-rm -rf awspublic
	env HUGO_PUBLISHDIR=awspublic hugo --noTimes
	-rm -f awsindex.db
	env HUGO_BASE_URL=https://blog.majid.info/ ./fts5index/fts5index -db awsindex.db -hugo
	rclone sync -P awspublic s3-blog:fazal-majid
	rsync -azvH awspublic/. bespin:hugo/public
	scp awsindex.db bespin:hugo/search.db
	ssh bespin svcadm restart fts5index
	aws cloudfront create-invalidation --distribution-id E************B --paths '/*'
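The same cache invalidation can be triggered from Python with boto3 instead of shelling out to awscli. A minimal sketch (the helper function and its names are mine, and the distribution ID stays redacted as in the Makefile):

```python
import time


def invalidation_batch(paths):
    """Build the InvalidationBatch structure CloudFront expects.

    CallerReference must be unique per invalidation request; a timestamp
    is a simple way to achieve that.
    """
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }


def invalidate(distribution_id, paths=("/*",)):
    # boto3 is imported lazily so the helper above stays usable without AWS
    import boto3

    cf = boto3.client("cloudfront")  # uses your default AWS credentials
    return cf.create_invalidation(
        DistributionId=distribution_id,
        InvalidationBatch=invalidation_batch(paths),
    )


# invalidate("E************B")
```

Note that invalidating `/*` counts as a single path against CloudFront’s free monthly invalidation quota.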

First of all, even though Lambda@Edge runs everywhere CloudFront does, you cannot create the functions everywhere: you will need to go to the Lambda functions console, then switch your region to US-East-1 (N. Virginia) in the AWS Console drop-down menu, even though my CloudFront distribution and S3 bucket are in eu-west-2 (London).

Click on the Create Function button.

Then choose Author from scratch, give a name (in my case, SecurityHeaders) and choose the Python 3.8 runtime.

In the development environment, click on lambda_function.py to edit the code of your function.

Click on Deploy (which is really more of a Save button), then press the orange Test button. Choose the event template cloudfront-modify-response-header, give the test event a name (e.g. TestHeaders), save it, and click the Test button again to verify the function executes without exceptions.

Here is the code I use:

def lambda_handler(event, context):
    cf = event["Records"][0]["cf"]
    response = cf["response"]
    headers = response["headers"]
    headers['strict-transport-security'] = [{
      "key": "Strict-Transport-Security",
      "value": "max-age=31536000; includeSubDomains; preload"
    }]
    headers['content-security-policy'] = [{
      "key": "Content-Security-Policy",
      "value": "default-src 'self' https://*.majid.info/ https://*.majid.org/; object-src 'none'; frame-ancestors 'none'; form-action 'self' https://*.majid.info/; base-uri 'self'"
    }]
    headers['x-frame-options'] = [{
      "key": "X-Frame-Options",
      "value": "SAMEORIGIN"
    }]
    headers['x-xss-protection'] = [{
      "key": "X-Xss-Protection",
      "value": "1; mode=block"
    }]
    headers['x-content-type-options'] = [{
      "key": "X-Content-Type-Options",
      "value": "nosniff"
    }]
    headers['referrer-policy'] = [{
      "key": "Referrer-Policy",
      "value": "no-referrer-when-downgrade"
    }]
    headers['feature-policy'] = [{
      "key": "Feature-Policy",
      "value": "accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'none'; battery 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; encrypted-media 'none'; execution-while-not-rendered 'none'; execution-while-out-of-viewport 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; layout-animations 'none'; legacy-image-formats 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; navigation-override 'none'; oversized-images 'none'; payment 'none'; picture-in-picture 'none'; publickey-credentials-get 'none'; sync-xhr 'none'; usb 'none'; vr 'none'; wake-lock 'none'; screen-wake-lock 'none'; web-share 'none'; xr-spatial-tracking 'none'; notifications 'none'; push 'none'; speaker 'none'; vibrate 'none'; payment 'none'"
    }]
    headers['permissions-policy'] = [{
      "key": "Permissions-Policy",
      "value": "accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), cross-origin-isolated=(), display-capture=(), document-domain=(), encrypted-media=(), execution-while-not-rendered=(), execution-while-out-of-viewport=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), navigation-override=(), payment=(), picture-in-picture=(), publickey-credentials-get=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=(), clipboard-read=(), clipboard-write=(), gamepad=(), speaker-selection=(), conversion-measurement=(), focus-without-user-activation=(), hid=(), idle-detection=(), serial=(), sync-script=(), trust-token-redemption=(), vertical-scroll=(), notifications=(), push=(), speaker=(), vibrate=(), interest-cohort=()"
    }]
    headers['x-fm-version'] = [{
      "key": "x-fm-version",
      "value": str(context.function_version)
    }]
    # caching
    if "request" in cf and "uri" in cf["request"]:
      url = cf["request"]["uri"]
      ext = url.split('.')[-1].lower()
      if url.endswith('/') or ext in ('html', 'gif', 'png', 'jpg', 'jpeg', 'ico', 'css', 'js', 'eot', 'woff', 'mp4', 'svg'):
        headers['expires'] = [{
          "key": "Expires",
          "value": "Thu, 31 Dec 2037 23:55:55 GMT"
        }]
        headers['cache-control'] = [{
          "key": "Cache-Control",
          "value": "max-age=315360000, immutable"
        }]
        
    return response
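Before wiring the function into CloudFront, it is worth sanity-checking the event plumbing locally. Here is a minimal harness with a stripped-down stand-in handler (it only sets HSTS; substitute the full function above when running for real). The event shape mirrors CloudFront’s viewer-response test template:

```python
def lambda_handler(event, context):
    # stripped-down stand-in: sets only HSTS; swap in the full handler above
    response = event["Records"][0]["cf"]["response"]
    response["headers"]["strict-transport-security"] = [{
        "key": "Strict-Transport-Security",
        "value": "max-age=31536000; includeSubDomains; preload",
    }]
    return response


class FakeContext:
    # Lambda passes a context object; only function_version is used by the
    # full handler, so that is all we need to fake
    function_version = "$LATEST"


event = {
    "Records": [{
        "cf": {
            "request": {"uri": "/index.html"},
            "response": {"status": "200", "headers": {}},
        }
    }]
}

result = lambda_handler(event, FakeContext())
print(result["headers"]["strict-transport-security"][0]["value"])
# prints: max-age=31536000; includeSubDomains; preload
```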

You will need to modify the hardcoded value for Content-Security-Policy; most likely you don't want your images and assets to be served only from https://*.majid.info/... Also, I cache all HTML forever in the browser, which may be more aggressive than you want if you update content more frequently than I do.

Before you can set up the hook, you will need to deploy your code to Lambda@Edge.

Now, this is very important: there are four different places a Lambda@Edge function can hook into (viewer request, origin request, origin response and viewer response).

If you deploy your function in the wrong place, you will most likely cause HTTP 500 errors until you can delete the bad trigger and redeploy, a process that takes an interminable 5–10 minutes to percolate through the CloudFront network (ask me how I know...). The hook we want (event trigger in Lambda@Edge parlance) is Viewer Response; unfortunately, the deployment dialog defaults to Origin Request.

Click the disclaimer checkbox and press the Deploy button. It will take a few minutes to deploy to CloudFront, and then you can use curl or your browser’s developer console to verify the headers are sent. I include a header X-FM-Version to verify which version of the function was deployed.

fafnir ~>curl -sSL -D - -o /dev/null 'https://blog.majid.info/hsts-preload/'
HTTP/2 200 
content-type: text/html; charset=utf-8
content-length: 26260
x-amz-id-2: 3ndAsEvUgHDhUYxok9kDnaNCUeQ8QMCbVURoiyjQHc699mrHQvJpN7xwgUeAp7Ir/9Pd1sLwtOU=
x-amz-request-id: 0NDCZD7JEG55903A
date: Wed, 28 Apr 2021 17:55:17 GMT
x-amz-meta-mtime: 1618529322.342819304
last-modified: Thu, 15 Apr 2021 23:28:59 GMT
etag: "33eb01a86db2b3f800c7bee0b5c10c11"
server: AmazonS3
vary: Accept-Encoding
strict-transport-security: max-age=31536000; includeSubDomains; preload
content-security-policy: default-src 'self' https://*.majid.info/ https://*.majid.org/; object-src 'none'; frame-ancestors 'none'; form-action 'self' https://*.majid.info/; base-uri 'self'
x-frame-options: SAMEORIGIN
x-xss-protection: 1; mode=block
x-content-type-options: nosniff
referrer-policy: no-referrer-when-downgrade
feature-policy: accelerometer 'none'; ambient-light-sensor 'none'; autoplay 'none'; battery 'none'; camera 'none'; display-capture 'none'; document-domain 'none'; encrypted-media 'none'; execution-while-not-rendered 'none'; execution-while-out-of-viewport 'none'; fullscreen 'none'; geolocation 'none'; gyroscope 'none'; layout-animations 'none'; legacy-image-formats 'none'; magnetometer 'none'; microphone 'none'; midi 'none'; navigation-override 'none'; oversized-images 'none'; payment 'none'; picture-in-picture 'none'; publickey-credentials-get 'none'; sync-xhr 'none'; usb 'none'; vr 'none'; wake-lock 'none'; screen-wake-lock 'none'; web-share 'none'; xr-spatial-tracking 'none'; notifications 'none'; push 'none'; speaker 'none'; vibrate 'none'; payment 'none'
permissions-policy: accelerometer=(), ambient-light-sensor=(), autoplay=(), battery=(), camera=(), cross-origin-isolated=(), display-capture=(), document-domain=(), encrypted-media=(), execution-while-not-rendered=(), execution-while-out-of-viewport=(), fullscreen=(), geolocation=(), gyroscope=(), magnetometer=(), microphone=(), midi=(), navigation-override=(), payment=(), picture-in-picture=(), publickey-credentials-get=(), screen-wake-lock=(), sync-xhr=(), usb=(), web-share=(), xr-spatial-tracking=(), clipboard-read=(), clipboard-write=(), gamepad=(), speaker-selection=(), conversion-measurement=(), focus-without-user-activation=(), hid=(), idle-detection=(), serial=(), sync-script=(), trust-token-redemption=(), vertical-scroll=(), notifications=(), push=(), speaker=(), vibrate=(), interest-cohort=()
x-fm-version: 22
expires: Thu, 31 Dec 2037 23:55:55 GMT
cache-control: max-age=315360000, immutable
x-cache: Hit from cloudfront
via: 1.1 f655cacd0d6f7c5dc935ea687af6f3c0.cloudfront.net (CloudFront)
x-amz-cf-pop: AMS54-C1
x-amz-cf-id: QkB3rN2hWiI8ah_EJ3x3bvjgbm_BrqhFG1GJ_f4po-Mc2rs_TjTF-g==

Needless to say, because of the convoluted nature of this process, and the high likelihood of making mistakes, you should test this on a non-production site before you try this on a live site.

If you associated the Lambda function with the wrong event trigger by mistake, you can delete it by going through the different deployed versions of your function, finding the trigger and deleting it.

HSTS: surprisingly rare

HTTP Strict Transport Security (HSTS) is a critical security feature that allows a site to say “always use the secure HTTPS version, not the insecure unencrypted one”. There is a chicken-and-egg effect: the first time you access a website, you have no way of knowing whether it has HSTS turned on, so browsers distribute an “HSTS Preload” list of domains for which it is turned on even if you have never accessed them before, as explained by Adam Langley of the Google Security Team. On Chromium-based browsers you can check by accessing chrome://net-internals/#hsts. Yours truly is on the list, which means that almost every single device on the planet has a file with my name in it, to my never-ceasing amusement.

Someone asserted that most e-commerce and financial sites are registered with HSTS Preload. I have a pretty jaundiced view of banks' security: the fact that most of them consider sending 6-digit codes by SMS a valid form of two-factor authentication leads me to believe they mostly engage in security theater. So I used the official Google Chrome HSTS Preload portal to check.

I was shocked to find out that in fact not only is HSTS Preload very rare, but even HSTS itself is hardly present. None of the sites I checked used either.

Not even Amazon.com has it, despite being a company that operates a Certificate Authority.
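Checking for yourself is easy enough. A small sketch (the classification helper and its names are mine; the commented-out fetch works against whatever site you care about):

```python
def hsts_status(headers):
    """Classify a response header mapping: 'none', 'hsts' or 'hsts+preload'."""
    value = ""
    for name, v in headers.items():
        # header names are case-insensitive, so normalize before comparing
        if name.lower() == "strict-transport-security":
            value = v
    if not value:
        return "none"
    return "hsts+preload" if "preload" in value else "hsts"


# Live usage (uncomment to probe a real site):
# import urllib.request
# resp = urllib.request.urlopen("https://blog.majid.info/")
# print(hsts_status(dict(resp.getheaders())))
```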

The only explanation I can think of is that this is a deliberate product decision to make life easier on those annoying free WiFi with captive portals, at the expense of security.

Captive portals are found on WiFi networks that don’t support IEEE 802.11u Hotspot 2.0: instead of showing you a popup when you connect to the WiFi asking you to agree to the terms of service, sign in to a paid WiFi service or whatever, the network hijacks your first non-TLS HTTP request and shows you the captive portal page instead (pro tip: use neverssl.com as the first page you access on those portals). If you were to only access https://amazon.com/, you would hang forever, whereas with http://amazon.com/ you would first get the captive portal page, then on reload the actual Amazon page.

The flip side is that anyone can set up a WiFi pineapple and SSLstrip in a Starbucks to impersonate their free WiFi, hijack your connection by issuing a deauthentication frame to force you to disconnect from Starbucks' WiFi and connect instead to your fake Starbucks WiFi, and then the attacker can use the SSL stripping described by Adam Langley to steal your Amazon password, even if you have two-factor authentication enabled. Given how easy Amazon has made it to impersonate them, I am surprised this kind of scam is not more prevalent.

My Broadband Setup

TL;DR: Setting up secure and resilient Internet access in a country with sub-par infrastructure.

I moved to the UK, a country that was a leader in Europe for PC adoption and early telecoms deregulation, but has since become one of the worst for the quality of its broadband through misguided laissez-faire policies. The only fixed broadband option available in my apartment is BT Openreach’s pathetic VDSL service1 (resold by Vodafone), which advertises 72 Mbps but I am lucky to get 40 Mbps down and 10 Mbps up.

There are several problems with this state of affairs:

  • The network is very unreliable. I’ve had outages lasting 8 hours. It is so bad I wrote my own tool to track ping times and downtime.
  • The consumer ISPs in the UK are anything but network-neutral, due to government regulations mandating Orwellian nanny-filters on the connection2. At one point, I was unable to reach Stack Overflow for over two days. It turns out that for some unfathomable reason Vodafone decided to use Stack Overflow as the test site when they developed the government-mandated nanny-filter, and somehow that was deployed to production, as per this highly instructive email thread.
  • The IP address is dynamic. While Vodafone does not change it too often and it can be worked around using Dynamic DNS, on cellular carriers the use of Carrier-Grade NAT (CGNAT) is rife, and it makes those connections highly unsuitable for:
    • self-hosting mail servers, calendar or other services
    • working from home where I need to have long-lived SSH connections doing critical work.

Recently I found out my mobile operator, Three, offers 5G fixed broadband service. I was skeptical: their 4G service in my NIMBY-infested area3 is abysmal (I hardly ever have any signal at all on Hampstead High Street), but it turns out their 5G service is excellent, offering 500 Mbps down and 30 Mbps up with decent ping times, probably because they managed to buy a contiguous 100 MHz allocation of 5G spectrum. Unfortunately, the service is not officially offered in my post code, so I decided to roll my own using an unlimited SIM card and an unlocked Huawei CPE Pro 2 5G router.

I have been experimenting with VPNs of late, leading to my edgewalker self-hosted VPN server, and building a VLAN on my network that thinks it is in the US using a VPN provider that shall remain nameless because it still has not been blocked by Netflix. This allows my daughter to watch her favorite US shows that are not available in the UK because of the despicable geofencing of the content cartels, who want to gouge you depending on where you live (except in this case they are not even offering the gouging, just no content).

The natural next step is to make the entire network connect to the Internet via a WireGuard VPN. Because WireGuard was designed for mobile connections, like IPsec/IKEv2/MOBIKE it easily adapts to shifting IP addresses (as long as one end stays put). This means it can deal with CGNAT and also fail over from 5G to DSL and back without breaking a sweat or even dropping a session.
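The roaming behavior comes from WireGuard’s willingness to update a peer’s endpoint on the fly, plus a keepalive to hold the NAT mapping open. A sketch of the home-router side, in wg-quick syntax for brevity (all keys, addresses and the endpoint are placeholders; on OpenBSD the equivalent settings go in hostname.wg0 instead):

```ini
[Interface]
PrivateKey = <router-private-key>
Address = 10.9.0.2/24
# shrink the MTU to leave room for the WireGuard encapsulation overhead
MTU = 1380

[Peer]
PublicKey = <cloud-server-public-key>
Endpoint = vpn.example.com:51820
AllowedIPs = 0.0.0.0/0
# periodic keepalives hold the CGNAT mapping open so the fixed end can
# always reach us even as our public IP shifts between 5G and DSL
PersistentKeepalive = 25
```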

Unfortunately there are side-effects to using a self-service VPN hosted by a cloud provider:

  • Netflix, Amazon and the BBC will refuse to serve video to you. I had to work around it by creating a special VLAN for VPN-averse devices (the LG Smart TV, AppleTV 4K and any other streamers in my household). This VLAN is bridged to the Huawei in a way that stops the offensive STP packets, so it is as if they were plugged directly into the Huawei. This is not a solution for when we want to watch video from our iDevices, however.
  • The VPN encapsulation reduces the maximum packet size (MTU) from the standard 1500 bytes of Ethernet to 1380 or so. Some sites have broken Path MTU Discovery (I found out the hard way that DuckDuckGo is one of them): by blocking ICMP packets, the server does not realize its large packets are not getting through and keeps retrying in vain until the browser times out in disgust. Setting the OpenBSD PF scrub (no-df) option took care of that.
  • Then there is the bizarre phenomenon by which Google thinks my IP is in the United Arab Emirates. I do not know how: IP2Location thinks it is in the Netherlands, and MaxMind that it is in the UK (as it is). I tried again with some other Vultr servers and kept being located in the UAE or Saudi Arabia. My best guess is that Google builds its own IP geolocation database using GPS data from Android phones, that some brave souls in the UAE or Saudi Arabia used a VPN service running on Vultr servers, and that this caused the Vultr IPs to be associated with those countries. The only way I found to resolve it was to keep creating virtual servers and additional IPs until I found a pair that did not locate to the UAE or Saudi Arabia. Now Google thinks I am in the US rather than the UK; I can live with that.
  • Some services like Wikipedia will also block the device from making edits, as it triggers a false positive for an open proxy. I sent an email to them on Saturday night and they had fixed that by the next morning, whereas Google makes strenuous efforts to ensure you cannot reach a human within their organization, ever, and there is seemingly no way to prevent the defective IP geolocation from screwing things up (they disabled the /ncr workaround they used to have a few years ago). Tells you everything you need to know about the importance of customer service for a monopoly.
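The PF workaround for the broken Path MTU Discovery mentioned above is essentially a one-liner in pf.conf. A sketch (the interface name and MSS value are examples; pick an MSS that fits your tunnel MTU minus TCP/IP header overhead):

```
# clear the don't-fragment bit so oversized packets can be fragmented,
# and clamp TCP MSS so most traffic fits the tunnel MTU in the first place
match on wg0 scrub (no-df max-mss 1340)
```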

This is what I implemented, by replacing the too-limiting Ubiquiti Security Gateway in my UniFi switched and wireless network with an OpenBSD router that establishes a WireGuard VPN to a modified edgewalker running in the cloud with Vultr.

The configuration is quite complex because I have the following VLANs:

  • The default VLAN (which is actually not even a VLAN, as Ubiquiti gear is not really Enterprise-class and does not default to VLANs).
  • VLAN 2 for my office work-from-home Mac; I just do not trust the various antivirus (and other software required for compliance) anywhere near my personal networks
  • VLAN 4 for the VPN-averse devices as mentioned above
  • VLAN 666 for Internet of Things devices (at least those that can be operated without connecting to the LAN)
  • VLAN 1776 for my geofencing-busting freedom VPN that thinks it is in the USA
  • Not a VLAN, but the Ethernet connection between my OpenBSD box and the Huawei router runs on a dedicated interface because in a bizarre effort to be “helpful” it sends a stream of Spanning Tree Protocol (STP) packets that basically cause my Ubiquiti switched network to melt down. OpenBSD can block them, but seemingly UniFi does not give you that control (so much for security, then). VLAN 4 is bridged to this.

OpenBSD has a concept of routing domains that allows you to virtualize your network stack into multiple routing tables, the way you can with VRF on a Cisco. This has proved invaluable, as has managing the configuration files in git to ensure I can always back out failed changes, and using Emacs’s TRAMP mode to edit files remotely.
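As an illustration of routing domains, assigning an interface to one is a single line of configuration (the interface name and addresses below are invented for the example):

```
# /etc/hostname.em1: place this interface in routing domain 1,
# giving it its own isolated routing table and ARP state
rdomain 1
inet 192.168.2.1 255.255.255.0
```

Commands can then be run inside a given domain with route(8), e.g. `route -T1 exec ping 192.168.2.10`.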

It is mostly running; I have yet to move the Vodafone VDSL PPPoE circuit over from the decommissioned USG to the OpenBSD router, and to set up an IGP or some other routing protocol to fail over the default route for the Internet connection underlying WireGuard if one of the two links fails. I am sure I will discover oddities as I go.

5G is extremely sensitive to positioning. Moving the Huawei just 20cm along the window makes the difference between 300 Mbps down / 10 Mbps up / 20 ms ping and 500 / 30 / 12 ms.

Not everything is perfect, of course. Ping times have risen slightly, and are more variable, as can be expected of a wireless network with layers of VPN processing latency added.


  1. It is a travesty that the Advertising Standards Authority has allowed ISPs to deceptively advertise their lousy copper DSL networks as “full fibre” on the basis they have fiber somewhere, and that this was not laughed out of court. ↩︎

  2. The UK is not quite as bad an enemy of the Internet as Australia, but only just. After all, this is a country without a Constitution, without a Bill of Rights or separation of Church and State, with a monarchy that is far from merely ceremonial, and where the ruling party campaigned on a manifesto of “we need to cut back on human rights”. ↩︎

  3. NIMBYs do not like cellular towers and even Uber drivers remark on how bad reception is in Hampstead. ↩︎

Automating Epson SSL/TLS certificate renewal

Network-capable Epson printers like my new ET-16600 have a web-based user interface that supports HTTPS. You can even upload publicly recognized certificates from Let’s Encrypt et al.; unfortunately, the only options they offer are a Windows management app (blech) or a manual form.

When you have to upload this every month (that’s when I automatically renew my Let’s Encrypt certificates), this gets old really fast, and strange errors happen if you forget to do so and end up with an expired certificate.

I wrote a quick Python script to automate this (and yes, I am aware of the XKCDs on the subject of runaway automation):

#!/usr/bin/env python3
import requests, html5lib

# update these fields for your environment
URL = 'https://myepson.example.com/'
USERNAME = 'majid'
PASSWORD = 'your-admin-UI-password-here'
KEYFILE = '/home/majid/web/acme-tiny/epson.key'
CERTFILE = '/home/majid/web/acme-tiny/epson.crt'
CAFILE = '/home/majid/web/acme-tiny/lets-encrypt-r3-cross-signed.pem'

# step 1, authenticate
jar = requests.cookies.RequestsCookieJar()
set_url = URL + 'PRESENTATION/ADVANCED/PASSWORD/SET'
r = requests.post(set_url, cookies=jar,
                  data={
                    'INPUTT_USERNAME': USERNAME,
                    'access': 'https',
                    'INPUTT_PASSWORD': PASSWORD,
                    'INPUTT_ACCSESSMETHOD': 0,
                    'INPUTT_DUMMY': ''
                  })
assert r.status_code == 200
jar = r.cookies

# step 2, get the cert update form iframe and its token
form_url = URL + 'PRESENTATION/ADVANCED/NWS_CERT_SSLTLS/CA_IMPORT'
r = requests.get(form_url, cookies=jar)
tree = html5lib.parse(r.text, namespaceHTMLElements=False)
data = dict([(f.attrib['name'], f.attrib['value']) for f in
             tree.findall('.//input')])
assert 'INPUTT_SETUPTOKEN' in data

# step 3, upload key and certs
data['format'] = 'pem_der'
del data['cert0']
del data['cert1']
del data['cert2']
del data['key']

upload_url = URL + 'PRESENTATIONEX/CERT/IMPORT_CHAIN'
r = requests.post(upload_url, cookies=jar,
                  files = {
                    'key': open(KEYFILE, 'rb'),
                    'cert0': open(CERTFILE, 'rb'),
                    'cert1': open(CAFILE, 'rb')
                  },
                  data=data)

assert 'Shutting down' in r.text
print('Epson certificate successfully uploaded to printer.')
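I run this from cron right after the monthly certificate renewal. A sketch of the crontab entry (the script path, schedule and log file are examples):

```
# upload the freshly renewed certificate to the printer monthly
30 4 1 * * /home/majid/bin/epson-cert-upload.py >> /var/log/epson-cert.log 2>&1
```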

Update (2020-12-29):

If you are having problems with the Scan to Email feature, with the singularly unhelpful message “Check your network or WiFi connection”, it may be that the Epson does not recognize the new Let’s Encrypt R3 CA certificate. You can address this by importing it in the Web UI, under the “Network Security” tab, then the “CA Certificate” menu item on the left. The errors I was seeing in my postfix logs were:

Dec 29 13:30:20 zulfiqar mail.info postfix/smtpd[13361]: connect from epson.majid.org[10.0.4.33]
Dec 29 13:30:20 zulfiqar mail.info postfix/smtpd[13361]: SSL_accept error from epson.majid.org[10.0.4.33]: -1
Dec 29 13:30:20 zulfiqar mail.warn postfix/smtpd[13361]: warning: TLS library problem: error:14094418:SSL routines:ssl3_read_bytes:tlsv1 alert unknown ca:ssl/record/rec_layer_s3.c:1543:SSL alert number 48:
Dec 29 13:30:20 zulfiqar mail.info postfix/smtpd[13361]: lost connection after STARTTLS from epson.majid.org[10.0.4.33]
Dec 29 13:30:20 zulfiqar mail.info postfix/smtpd[13361]: disconnect from epson.majid.org[10.0.4.33] ehlo=1 starttls=0/1 commands=1/2

Apple iCalendar's buggy SNI

TL;DR: If you use Apple’s calendar client software, do not run the server on an IP and port shared with any other SSL/TLS services.

I run my own CalDAV calendar server for my family and for myself. For a very long time I used DAViCal, but it’s always been a slight annoyance to set up on Apple devices because they don’t like DAViCal’s https://example.com/davical/caldav.php/majid URLs. What’s more, recent versions of iCalendar would pop up password prompts at random, and after re-entering the password a couple of times (once is not enough), would finally go on and work. The various devices would also all too often get out of sync, sometimes with the inscrutable error:

Server responded with “500” to operation CalDAVAccountRefreshQueueableOperation

requiring deleting the calendar account and recreating it by hand.

I tried replacing DAViCal with Radicale today, with the same flaky user experience, and I finally figured out why: Apple uses at least a couple of daemons to manage calendar and sync, including dataaccessd, accountsd and remindd (also CalendarAgent depending on your OS version). It seems some or all of them do not implement Server Name Indication (SNI) consistently. SNI is the mechanism by which a TLS client indicates which server it is trying to connect to during the TLS handshake, so multiple servers can share the same IP address and port; it is an absolutely vital part of the modern web. For example, many sites use Amazon Web Services' Elastic Load Balancer or CloudFront services, which are shared by multiple customers; if Amazon had to dedicate a separate IP address to each, it would break their business model1.
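For the curious, in Python the SNI value is simply whatever you pass as server_hostname when wrapping a socket. A sketch of inspecting which certificate a server hands out (the helper function is mine; the host name in the commented-out call is an example):

```python
import socket
import ssl

# the ssl module sends server_hostname as the TLS SNI extension; a client
# that omits or botches it is handed whatever certificate the server
# serves by default, which is exactly the failure mode described above
assert ssl.HAS_SNI


def cert_subject(host, port=443):
    """Handshake with host (sending it as SNI) and return the cert subject."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            # subject is a tuple of RDNs, e.g. ((('commonName', 'x.example'),),)
            return dict(rdn[0] for rdn in tls.getpeercert()["subject"])


# print(cert_subject("blog.majid.info"))
```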

Sometimes, those daemons will not use SNI, which means they will get your default server. In my case, it is password-protected with a different password than the CalDAV one, which is what triggers the “enter password” dialog. At other times, they will call your CalDAV server with dubious URLs like /.well-known/caldav, /principals/, /dav/principals/ or /caldav/v2, and if your server has a different HTTP password for those and sends back an HTTP 401 status code instead of a 404 Not Found, well, that will also trigger a reauthentication prompt.

Big Sur running on my M1 MacBook Air seems to be more consistent about using SNI, but will still poke around on those URLs, triggering the reauthentication prompts.

In other words, the only way to get an Apple-compatible calendar server running reliably is to dedicate an IP and port to it that are not shared with anything else. I only have one IP address at home, where the server runs, and I run other vital services behind HTTPS, so I can’t dedicate port 443 to a CalDAV server. Fortunately, the account configuration will accept the syntax example.org:8443 to use a non-standard port (make sure you use the Advanced option, not Automatic), but this is incredibly sloppy of Apple.
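With Radicale, binding that dedicated non-standard port is a two-line change to its configuration file. A sketch (the bind address and port are examples):

```ini
# Radicale config sketch: give the CalDAV server its own port, so Apple's
# SNI-less requests cannot land on some other virtual host by accident
[server]
hosts = 0.0.0.0:8443
```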


  1. Amazon does in fact have a Legacy Clients Support option, but they charge a $600/month fee for that, and if you need more than two, they will demand written justification before approving your request. ↩︎