net::err_cert_common_name_invalid is one of the more confusing TLS errors because the certificate the server is presenting is, in every other respect, valid. It is signed by a trusted CA. It is not expired. It chains to a trusted public root. The browser is rejecting it for one specific reason: the hostname you typed in the URL is not listed in the certificate's Subject Alternative Name (SAN) field. The cert is genuine — just not issued for the hostname you connected to.
The fix depends on whether you control the server, the DNS, the browser, or none of the above. This guide walks through every common cause of err_cert_common_name_invalid, the exact command to confirm each one, and what to do about it. It also covers the client-side scenario (you're a user staring at the warning) since the audience for this error is split roughly evenly between operators and end users.
What the error actually means
Before fixing, get the meaning right. Modern Chromium-based browsers (Chrome, Edge, Brave, Opera) only check the SAN extension when matching a hostname to a certificate — they ignore the legacy Common Name (CN) field entirely. This has been Chrome's behaviour since version 58 (April 2017).
So err_cert_common_name_invalid actually means: "the hostname you connected to is not in this certificate's SAN list." The error name is a historical artifact — it predates the CN deprecation and was never renamed.
What it specifically does not mean:
- It does not mean the cert is expired (that's err_cert_date_invalid).
- It does not mean the issuer is untrusted (that's err_cert_authority_invalid).
- It does not mean the chain is broken (that's also err_cert_authority_invalid, plus several others).
- It does not mean a man-in-the-middle attack — though one of the legitimate causes (captive portals, corporate proxies) is technically an authorised MITM.
Step 1 — Confirm what cert is actually being served
You cannot debug this from the browser warning alone. Use openssl to dump the cert the server is currently sending:
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
| openssl x509 -noout -subject -issuer -ext subjectAltName
Pay attention to all three fields:
subject=CN = example.com
issuer=C = US, O = Let's Encrypt, CN = R3
X509v3 Subject Alternative Name:
DNS:example.com, DNS:www.example.com
Now compare the SAN list against the URL the user typed:
- Hitting https://example.com/ → example.com must appear in the DNS: entries. ✅ above.
- Hitting https://www.example.com/ → www.example.com must appear. ✅ above.
- Hitting https://api.example.com/ → api.example.com must appear. ❌ above — would trigger the error.
- Hitting https://192.0.2.10/ → an IP: SAN entry must exist for 192.0.2.10. (See Cause 6 below.)
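The comparison the browser performs can be sketched in a few lines of Python. This illustrates the exact-match rule only (wildcards and IP entries carry extra rules, covered in Causes 2 and 6); the function name and the DNS:-prefixed SAN-list format are this article's shorthand, not a real TLS API:

```python
def hostname_in_san(hostname: str, san_entries: list[str]) -> bool:
    """True if the hostname has an exact DNS: match in the SAN list.

    Sketch only: wildcard entries (Cause 2) and IP: entries (Cause 6)
    need additional handling not shown here.
    """
    dns_names = {e.split(":", 1)[1] for e in san_entries if e.startswith("DNS:")}
    return hostname.lower() in dns_names

san = ["DNS:example.com", "DNS:www.example.com"]
print(hostname_in_san("www.example.com", san))  # True  -> covered
print(hostname_in_san("api.example.com", san))  # False -> the error
```

Whichever hostname returns False here is the one producing the browser warning.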
Whichever hostname is not in the SAN list is your error. The next step is to find out why the server is serving that particular cert — which has eight common causes.
For the full openssl playbook (chain verification, SAN extraction, SNI handling), see How to check and verify SSL certificates with OpenSSL.
Cause 1 — www. vs bare-domain mismatch
By a wide margin the most common cause. The cert covers example.com but the user typed www.example.com (or vice versa).
Confirm
echo | openssl s_client -servername www.example.com -connect www.example.com:443 2>/dev/null \
| openssl x509 -noout -ext subjectAltName
# Lists only DNS:example.com ← www is missing
Fix (server side)
Re-issue the cert with both names in the SAN list. With certbot:
sudo certbot --nginx -d example.com -d www.example.com
With a commercial CA, generate a new CSR that includes both:
openssl req -new -newkey rsa:2048 -nodes \
-keyout example.com.key -out example.com.csr \
-subj "/CN=example.com" \
-addext "subjectAltName = DNS:example.com,DNS:www.example.com"
Submit, validate, install — full walkthrough in How to renew an SSL certificate.
Fix (DNS / app side)
If you intentionally only want to serve one of the two, redirect the other at the HTTP layer. With Nginx, a separate server block on port 80 redirecting before TLS even tries:
server {
listen 80;
server_name www.example.com;
return 301 https://example.com$request_uri;
}
This is cleaner than re-issuing if www. is genuinely a redirect-only host. One caveat: the port-80 block only catches plain-HTTP requests. A user who types https://www.example.com negotiates TLS on port 443 before any redirect can fire, so the www name still needs to appear in some cert's SAN list if you want the HTTPS entry point to redirect without a warning.
Cause 2 — Wildcard certificate depth mismatch
A wildcard cert for *.example.com covers exactly one level of subdomain:
- ✅ api.example.com
- ✅ dashboard.example.com
- ❌ staging.api.example.com — two levels deep
- ❌ example.com — wildcard does not cover the bare apex
The most common surprise: a *.example.com cert does not cover example.com itself. You need either two SANs (example.com + *.example.com) or two separate certs.
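The single-level rule can be pinned down in a short sketch (a hypothetical helper, not a browser or TLS-library API):

```python
def wildcard_matches(pattern: str, hostname: str) -> bool:
    """Single-level wildcard match, per the rules above (a sketch)."""
    if not pattern.startswith("*."):
        return pattern == hostname
    suffix = pattern[1:]                      # "*.example.com" -> ".example.com"
    if not hostname.endswith(suffix):
        return False
    first_label = hostname[: -len(suffix)]
    # The wildcard must consume exactly one non-empty label
    return bool(first_label) and "." not in first_label

print(wildcard_matches("*.example.com", "api.example.com"))          # True
print(wildcard_matches("*.example.com", "staging.api.example.com"))  # False: two levels
print(wildcard_matches("*.example.com", "example.com"))              # False: apex not covered
```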
Confirm
echo | openssl s_client -servername staging.api.example.com -connect staging.api.example.com:443 2>/dev/null \
| openssl x509 -noout -ext subjectAltName
# DNS:*.example.com ← does not match staging.api.example.com (two levels)
Fix
Either reduce the subdomain depth (move staging.api.example.com to staging-api.example.com), or issue a second wildcard at the deeper level (*.api.example.com), or list the specific hosts as SANs alongside the wildcard.
Cause 3 — SNI misconfiguration (server returns a default cert)
When a server hosts multiple TLS sites on the same IP and port, it uses SNI (the hostname the client sends in the TLS ClientHello) to pick which cert to serve. If SNI is misconfigured — or if the client doesn't send SNI — the server returns a fallback / default cert, which is usually for a different hostname.
Confirm
Run the s_client check with and without the -servername flag:
# Without SNI — default cert
echo | openssl s_client -connect example.com:443 2>/dev/null \
| openssl x509 -noout -subject
# With SNI — should be the right cert
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
| openssl x509 -noout -subject
If both return the same wrong cert, the server has no server block matching example.com — Nginx falls back to the first server block (or one marked default_server).
Fix
In Nginx, ensure each hostname has its own server block with the right server_name and ssl_certificate:
server {
listen 443 ssl http2;
server_name example.com www.example.com;
ssl_certificate /etc/nginx/ssl/example.com/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/example.com/privkey.pem;
# ...
}
server {
listen 443 ssl http2;
server_name api.example.com;
ssl_certificate /etc/nginx/ssl/api.example.com/fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/api.example.com/privkey.pem;
# ...
}
Reload (nginx -t && systemctl reload nginx) and re-test with -servername.
In Apache, every <VirtualHost *:443> should have a matching ServerName/ServerAlias and its own SSLCertificateFile. In load balancers (ALB, GCP, HAProxy), the SNI-to-cert mapping is configured separately — verify it lists every hostname you serve.
Cause 4 — Self-signed certificate or internal CA
A self-signed cert (or one signed by a private CA your browser doesn't trust) typically triggers err_cert_authority_invalid. If, however, the server is presenting a cert that is signed by a trusted CA but was issued for a different, internal hostname (a development / staging cert reused on production), you'll see err_cert_common_name_invalid instead.
Confirm
echo | openssl s_client -servername example.com -connect example.com:443 2>/dev/null \
| openssl x509 -noout -subject -issuer -ext subjectAltName
# subject=CN = staging.example.com
# issuer=C = US, O = Let's Encrypt, CN = R3
# DNS:staging.example.com
If the SAN list shows hostnames you don't recognise, you're serving the wrong cert from disk.
Fix
Find the right cert (or issue a new one for the correct hostname) and update the ssl_certificate path. Verify the swap with openssl s_client -servername against the public hostname.
Cause 5 — Captive portal or corporate / school MITM proxy (client-side)
This is the one cause that you, the operator, may not have any control over. The user is on a network where a captive portal (hotel Wi-Fi, airport Wi-Fi) or a corporate / school proxy is intercepting TLS connections to inspect traffic. The proxy serves its own certificate for whatever URL the user typed — and that cert won't have your hostname in its SAN list, so the browser shows err_cert_common_name_invalid.
Confirm
- The error appears only on certain networks (typically a guest Wi-Fi or a managed corporate / school network) and disappears on cellular data or a different network.
- In the browser, click the lock icon → "Certificate is not valid" → look at the issuer. If it says something like "Cisco Umbrella Root CA", "BlueCoat Trust Root", or your employer's name, you're behind a TLS-inspecting proxy.
Fix
For end users:
- Switch to a different network (cellular hotspot is the easiest test).
- On a managed corporate machine the proxy CA is normally pre-installed, so the proxy's certificate is accepted silently. If you're seeing the warning, the proxy CA isn't in your browser's trust store; ask your IT team to install it.
- On a captive-portal network, complete the captive portal sign-in first; some captive portals only release traffic after sign-in and the warning is incidental to that.
For operators, there is nothing you can do server-side to fix this — the user's network is intercepting the connection before it reaches you. If you see a sudden surge of err_cert_common_name_invalid reports from users on corporate networks, the cause is almost certainly their employer's TLS-inspection proxy and the Subject field of the cert (visible in the user's browser) will name the proxy.
Cause 6 — IP address in the URL
Browsers will not match a cert's DNS SANs against an IP address. To use a cert for an IP-based URL, the cert must contain an explicit IP: SAN entry — and most public CAs won't issue those.
Confirm
echo | openssl s_client -connect 192.0.2.10:443 2>/dev/null \
| openssl x509 -noout -ext subjectAltName
# X509v3 Subject Alternative Name:
# DNS:example.com ← no IP entry
Fix
Use the hostname, not the IP. Add the hostname to your local hosts file (or DNS) if necessary and access via that name. For internal / private services where you legitimately need an IP-based URL, issue a cert from your internal CA with the IP in the SAN list:
openssl req -new -newkey rsa:2048 -nodes \
-keyout server.key -out server.csr \
-subj "/CN=192.0.2.10" \
-addext "subjectAltName = IP:192.0.2.10,DNS:server.internal"
(This requires your internal CA to honor IP SANs — most do.)
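If you're scripting checks, the two questions (is the URL host an IP literal, and does an IP: SAN cover it) can be answered with Python's standard ipaddress module. The SAN-entry format here is the DNS:/IP: shorthand used throughout this article:

```python
import ipaddress

def url_host_is_ip_literal(host: str) -> bool:
    """True if the URL host is an IP literal, so only an IP: SAN can match."""
    try:
        ipaddress.ip_address(host)
        return True
    except ValueError:
        return False

def ip_in_san(host: str, san_entries: list[str]) -> bool:
    """Check for a matching IP: entry (exact comparison, a sketch)."""
    target = ipaddress.ip_address(host)
    return any(
        e.startswith("IP:") and ipaddress.ip_address(e[3:]) == target
        for e in san_entries
    )

print(url_host_is_ip_literal("192.0.2.10"))                # True
print(ip_in_san("192.0.2.10", ["DNS:example.com"]))        # False -> Cause 6
print(ip_in_san("192.0.2.10", ["IP:192.0.2.10"]))          # True
```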
Cause 7 — Hosts file or DNS override pointing to the wrong server
The user's hosts file (or split-DNS, or a VPN's DNS server) is pointing the hostname at a different server's IP — one that serves a cert for a different hostname. Often a leftover entry from a past test.
Confirm
# What does DNS resolve to?
dig +short example.com
# What does the user's machine actually use?
getent hosts example.com # Linux
nslookup example.com # Windows / macOS
If the resolved IP differs from public DNS — or if /etc/hosts (or C:\Windows\System32\drivers\etc\hosts) contains an entry — that's the cause.
Fix
Remove the stale hosts entry. On a corporate machine, check that the active VPN's DNS server isn't returning a stale internal record.
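A quick way to scan for stale overrides is to parse the hosts file directly. A minimal sketch (pass it the file's contents; the helper name is ours):

```python
def hosts_overrides(hosts_text: str, hostname: str) -> list[str]:
    """Return IPs that a hosts file pins the hostname to (a sketch).

    Pass the contents of /etc/hosts (or the Windows equivalent).
    Any returned IP overrides DNS for that name on this machine.
    """
    ips = []
    for line in hosts_text.splitlines():
        line = line.split("#", 1)[0].strip()   # strip comments
        parts = line.split()
        if len(parts) >= 2 and hostname in parts[1:]:
            ips.append(parts[0])
    return ips

sample = """
127.0.0.1   localhost
# leftover from a past test:
203.0.113.7 example.com
"""
print(hosts_overrides(sample, "example.com"))      # ['203.0.113.7']
print(hosts_overrides(sample, "api.example.com"))  # []
```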
Cause 8 — The cert really is issued for the wrong domain
Sometimes the explanation is the boring one: the renewal generated a cert for the wrong hostname (typo in the CSR, wrong domain selected in the CA portal), and nobody noticed before deployment.
Confirm
openssl s_client -servername example.com -connect example.com:443 </dev/null 2>/dev/null \
| openssl x509 -noout -subject -ext subjectAltName
If the subject and SANs simply don't match the hostname you expected — and the issuer is your real CA — you've issued a cert for the wrong domain. Re-issue with the correct hostname; the wrong cert can be left to expire or revoked via the CA portal.
Quick triage flowchart
When you land on err_cert_common_name_invalid, the four-question triage:
- What's in the cert's SAN list? openssl s_client -servername host -connect host:443 | openssl x509 -noout -ext subjectAltName. If your hostname isn't there, the diagnosis is one of causes 1, 2, 4, or 8.
- Does the cert change when you drop -servername? Yes → server is misrouting due to SNI config. Cause 3.
- Does the issuer name match a known TLS-inspection product or your employer? Yes → cause 5 (network MITM); not fixable on the server.
- Does dig agree with getent hosts / nslookup? No → cause 7 (DNS / hosts file override).
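The four questions collapse into a small decision function. A sketch that follows the order above (the boolean answers are what you collect from the openssl and dig checks; the function and return strings are ours):

```python
def triage(hostname_in_san: bool, cert_changes_without_sni: bool,
           issuer_is_inspection_proxy: bool, local_dns_matches_public: bool) -> str:
    """Map the four triage answers to the likely cause (a sketch)."""
    if hostname_in_san:
        return "SAN covers the hostname: re-check the exact URL the user reported"
    if cert_changes_without_sni:
        return "cause 3: SNI misrouting"
    if issuer_is_inspection_proxy:
        return "cause 5: network MITM, not fixable server-side"
    if not local_dns_matches_public:
        return "cause 7: DNS / hosts-file override"
    return "cause 1, 2, 4, or 8: cert doesn't cover this hostname"

print(triage(False, True, False, True))   # cause 3
print(triage(False, False, True, True))   # cause 5
```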
In practice, cause 1 (www. mismatch) and cause 3 (SNI misconfiguration) account for ~80% of legitimate cases for site operators. Cause 5 (corporate MITM) accounts for most user-side reports where "the site is broken on my work laptop only".
Client-side workarounds (and the security caveat)
Sometimes you need to access a service now and the cert mismatch is known and acceptable (an internal staging environment, a freshly-deployed service before DNS propagation, a known captive-portal proxy you're authenticated to). Two safe-ish workarounds:
- Bypass the warning in Chrome temporarily: on the warning page, type thisisunsafe (no prompt — just type it on the page). Chrome will accept the cert for that session. Do not do this on a public network or for a service that handles credentials.
- Add the cert / CA to your local trust store if you control the cert. macOS: Keychain Access → drag in the cert → Trust → "Always Trust". Linux: update-ca-certificates after dropping the PEM into /usr/local/share/ca-certificates/. Windows: certmgr.msc → Trusted Root Certification Authorities.
Both bypass paths are real attack surface. Don't make them the default. If err_cert_common_name_invalid is appearing in production, fix the cert — don't tell users to click through.
Catch a broken cert mismatch before users do
The painful version of err_cert_common_name_invalid is "we deployed a renewal at 02:00 and 30% of users have been seeing the warning since". Browsers don't tell you when this happens — your monitoring does, or nobody does.
Continuous SSL monitoring closes the gap by:
- Connecting to your endpoint with explicit SNI for every hostname you operate, so a cert that's served correctly for example.com but missing for api.example.com is caught instantly.
- Validating the SAN list against the hostnames you've configured to monitor — not just "is there a cert", but "does the cert actually cover this name".
- Checking the chain trusts to a public root from a clean network, so a corporate-MITM-style failure (your CDN suddenly serving the wrong cert) is visible.
- Alerting on changes in subject / issuer / SAN list — a renewal that issued the right cert with the wrong SANs trips this immediately.
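The SAN-coverage assertion in the second bullet is, at its core, the matching rules from Causes 1 and 2 run over every monitored hostname. A self-contained sketch (a real monitor would pull san_entries from the live cert for each hostname, with explicit SNI):

```python
def san_coverage_gaps(monitored_hosts: list[str],
                      san_entries: list[str]) -> list[str]:
    """Return monitored hostnames the cert does not cover (a sketch)."""
    names = [e.split(":", 1)[1] for e in san_entries if e.startswith("DNS:")]

    def covered(host: str) -> bool:
        for name in names:
            if name == host:
                return True
            if name.startswith("*."):          # single-level wildcard only
                suffix = name[1:]
                label = host[: -len(suffix)] if host.endswith(suffix) else ""
                if label and "." not in label:
                    return True
        return False

    return [h for h in monitored_hosts if not covered(h)]

gaps = san_coverage_gaps(
    ["example.com", "www.example.com", "api.example.com"],
    ["DNS:example.com", "DNS:www.example.com"],
)
print(gaps)  # ['api.example.com'] -> the renewal missed a hostname
```

A check like this fails at deploy time, before the first user sees the warning.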
Xitoring's SSL certificate monitoring does all of these from multiple regions and pages on the first failure. Pair it with HTTPS uptime monitoring to catch the case where the cert is fine but the redirect chain to your apex domain (the www ↔ apex case in cause 1) is misconfigured. For the broader operational picture, see How to renew an SSL certificate and How to check and verify SSL certificates with OpenSSL.
Summary
net::err_cert_common_name_invalid means the cert is valid in every way except this one: the hostname you connected to is not in its SAN list. To debug it:
- Dump the served cert: openssl s_client -servername host -connect host:443 | openssl x509 -noout -subject -issuer -ext subjectAltName.
- Compare the URL hostname against the SAN list. Whichever name is missing is your error.
- Check www. vs bare domain (cause 1) — by far the most common cause; fix by re-issuing with both SANs or redirecting one to the other.
- Check wildcard depth (cause 2) — *.example.com covers one subdomain level only and not the apex.
- Re-run with and without -servername (cause 3) to detect SNI misconfiguration; if the cert changes, the server has no specific block for your hostname.
- Look at the issuer (cause 5). If it names a TLS-inspection product or an employer, the user is behind a corporate MITM and there is nothing to fix server-side.
- Compare DNS resolution against /etc/hosts (cause 7) for stale local overrides.
- Don't reach for thisisunsafe as a fix in production — that bypass is for incidental access, not user-facing failures.
The fastest way to never see err_cert_common_name_invalid in production is continuous SSL monitoring with SAN-coverage assertions: a check that fails the moment a renewal misses a hostname, before the first user hits it. The eight causes above are all small fixes once identified — the cost is in the time between deploy and detection, and that's exactly what monitoring eliminates.