A 403 Forbidden response from Nginx means the server understood the request and is refusing to fulfil it — but the status line by itself never tells you why. The same 403 appears whether a file exists with the wrong permissions, a directory has no index file, SELinux blocked the read, an allow/deny directive matched, or the PHP-FPM upstream rejected the request. Fixing it fast comes down to looking in the right places in the right order, and the very first place is always the error log — not the configuration.
This guide walks through the actual causes of an Nginx 403, in the order you should rule them out, with the exact commands at each step and the log lines that distinguish one cause from another.
What 403 actually means (and what it doesn't)
The HTTP spec is precise about this:
- 401 Unauthorized — the request is missing valid credentials. The server is telling the client "authenticate, then try again."
- 403 Forbidden — the server understood the request and the credentials (if any), and is refusing anyway. Re-authenticating will not help.
- 404 Not Found — the server has no representation of this URI. (Nginx will return 404 for missing files, not 403, unless something else is in play.)
When you get a 403 from Nginx specifically, one of these is true:
- Nginx itself reached the file system and was denied (permissions, ownership, SELinux).
- Nginx tried to render a directory and there was no `index` to serve and `autoindex` is off.
- A directive in your config (`deny`, `allow`, `auth_basic`, `if`, `internal`) actively rejected the request.
- Nginx successfully proxied to an upstream (PHP-FPM, an app server) which itself returned 403.
- Nginx refused to follow a symlink (`disable_symlinks`).
Everything below maps to one of those five branches.
Step 1 — Read the error log first
The Nginx access log will tell you that a 403 happened; the error log tells you why. Find it (error_log directive in nginx.conf, default /var/log/nginx/error.log) and tail it while you reproduce the request:
sudo tail -f /var/log/nginx/error.log
Then in another terminal:
curl -sv https://example.com/the-failing-path 2>&1 | head -40
The error log line you see is a near-perfect diagnosis. Common patterns:
| Error log message | Cause | Jump to |
|---|---|---|
| `directory index of "/var/www/site/" is forbidden` | No index file, autoindex off | Step 3 |
| `open() "/var/www/site/file.html" failed (13: Permission denied)` | File system permissions | Step 2 |
| `open() ... failed (13: Permission denied)` plus SELinux denials in audit.log | SELinux | Step 4 |
| `access forbidden by rule` | Explicit `deny` directive | Step 5 |
| `no user/password was provided for basic authentication` | `auth_basic` configured, no credentials | Step 5 |
| `client intended to send too large body` | Not a 403 — that is 413; included so you don't chase the wrong thing | — |
| `FastCGI sent in stderr: "Access denied"` | Upstream (PHP-FPM, etc.) returned 403 | Step 6 |
| `unix:/run/php/php8.x-fpm.sock failed (13: Permission denied)` | Nginx cannot reach the PHP-FPM socket | Step 6 |
| `"/var/www/site/file.html" is forbidden (symlink not in same FS / not owned by ...)` | `disable_symlinks` matched | Step 7 |
If the error log is silent when you reproduce the 403, raise the log level temporarily:
error_log /var/log/nginx/error.log debug;
Reload Nginx (sudo nginx -s reload), reproduce, and revert the level afterwards — debug is verbose and slow.
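If full debug logging is too noisy for a busy server, Nginx can scope it to a single client with the `debug_connection` directive (this requires a binary built with `--with-debug`; many distribution builds include it, and nginx.org packages ship a separate `nginx-debug` binary — check yours). A sketch with a placeholder address:

```nginx
events {
    # Only connections from this one client are logged at debug level;
    # everyone else stays at the configured error_log level.
    debug_connection 192.0.2.10;   # placeholder: your own workstation's IP
}
```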
Step 2 — File and directory permissions
The most common cause by a wide margin. Nginx's worker process runs as a non-root user (www-data on Debian/Ubuntu, nginx on RHEL/CentOS/Rocky/Alma) and needs to:
- Read the requested file.
- Read and traverse every parent directory all the way up to `/`.
Forgetting the traverse requirement is a classic 403. A file mode of 644 is correct, but if any parent directory is missing the x (execute / search) bit for "other", Nginx cannot reach the file.
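You can watch the traverse rule bite in a throwaway directory (all paths below are scratch examples created under mktemp, not your real docroot):

```shell
tmp=$(mktemp -d)
mkdir "$tmp/site"
echo hello > "$tmp/site/index.html"
chmod 644 "$tmp/site/index.html"   # the file itself is world-readable

chmod 750 "$tmp/site"              # but "other" loses the search bit, so a
                                   # user like www-data can no longer reach
                                   # index.html, whatever the file's mode is
if command -v namei >/dev/null; then
    namei -l "$tmp/site/index.html"   # the missing x is visible per component
fi

chmod 755 "$tmp/site"              # restoring the search bit fixes the path
rm -rf "$tmp"
```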
Find the worker user
ps -o user,comm -C nginx
# USER COMMAND
# root nginx ← master
# www-data nginx ← worker (this is the one that matters)
Or check the config:
grep -E '^\s*user' /etc/nginx/nginx.conf
# user www-data;
Walk the path with namei -l
namei shows ownership and permissions for every component of a path. If any line is missing the x bit at the right column, you have your answer:
sudo -u www-data namei -l /var/www/site/index.html
# f: /var/www/site/index.html
# drwxr-xr-x root root /
# drwxr-xr-x root root var
# drwxr-xr-x root root www
# drwxr-x--- deploy deploy site ← problem: "other" cannot enter
# -rw-r--r-- deploy deploy index.html
In the example above, /var/www/site is 750 and only the owner and group can enter — www-data is neither, so it gets a 403.
Fix
Two clean options. Pick one — don't combine them.
Option A — group-based access (preferred for a deploy user + Nginx setup):
sudo chgrp -R www-data /var/www/site
sudo chmod -R g+rX /var/www/site
sudo find /var/www/site -type d -exec chmod g+s {} \; # new files inherit group
`g+rX` (capital `X`) gives group execute only to directories and to files that already have an execute bit for some user — exactly what you want.
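The difference is easy to demonstrate on scratch files (throwaway paths again, not your docroot):

```shell
tmp=$(mktemp -d)
touch "$tmp/page.html"; chmod 600 "$tmp/page.html"   # no execute bit anywhere
touch "$tmp/deploy.sh"; chmod 700 "$tmp/deploy.sh"   # owner-executable
mkdir "$tmp/assets";    chmod 700 "$tmp/assets"

chmod -R g+rX "$tmp"   # capital X: directories and already-executable files
                       # gain g+x; plain data files only gain g+r

stat -c '%a %n' "$tmp/page.html" "$tmp/deploy.sh" "$tmp/assets"
# page.html becomes 640, deploy.sh 750, assets 750
rm -rf "$tmp"
```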
Option B — make Nginx the owner:
sudo chown -R www-data:www-data /var/www/site
sudo find /var/www/site -type d -exec chmod 755 {} \;
sudo find /var/www/site -type f -exec chmod 644 {} \;
What never works
- `chmod -R 777 /var/www/site` is not a fix. It hides the problem for an afternoon and silently breaks SELinux, your deploy user's ownership semantics, and your security posture. Do not.
- Setting the worker `user` to `root` to "make the 403 go away" is worse than 777.
Step 3 — Missing index file or autoindex off
Nginx returns 403 when a request resolves to a directory but neither of these is true:
- An `index` file listed in the `index` directive exists in that directory.
- `autoindex on;` is set (which would render an HTML directory listing).
Error log signature:
[error] 1234#1234: *56 directory index of "/var/www/site/" is forbidden
Fix
If this directory should serve a default page, add it (index.html, index.php, etc.) — and make sure it appears in the index directive:
server {
root /var/www/site;
index index.html index.htm index.php;
# ...
}
If this directory should intentionally show a listing (typical for /downloads/):
location /downloads/ {
autoindex on;
autoindex_exact_size off;
autoindex_localtime on;
}
If this directory should never be browsed, return 404 explicitly so you stop confusing users with a 403:
location = /private/ {
return 404;
}
Step 4 — SELinux (RHEL/CentOS/Rocky/Alma) or AppArmor (Ubuntu/Debian)
Mandatory access control is the third-most-common cause. Permissions look correct, the worker user is right, and you still get 403. The kernel is denying the read at a layer below POSIX permissions.
SELinux
Check whether SELinux is enforcing:
getenforce
# Enforcing
Look for the matching denial. The clean way is ausearch:
sudo ausearch -m AVC -ts recent | tail -20
A typical Nginx-related denial:
type=AVC msg=audit(...): avc: denied { read } for pid=1234 comm="nginx"
name="index.html" dev="dm-0" ino=...
scontext=system_u:system_r:httpd_t:s0
tcontext=unconfined_u:object_r:default_t:s0 tclass=file permissive=0
The two lines that matter are scontext (Nginx's domain — httpd_t) and tcontext (the file's type — should be httpd_sys_content_t, not default_t).
Translate the denial into a one-line cause:
sudo ausearch -m AVC -ts recent | audit2why
Fix (typical web content)
Set the correct file context and persist it:
sudo semanage fcontext -a -t httpd_sys_content_t '/var/www/site(/.*)?'
sudo restorecon -Rv /var/www/site
If the application also needs to write to the directory (uploads, cache):
sudo semanage fcontext -a -t httpd_sys_rw_content_t '/var/www/site/uploads(/.*)?'
sudo restorecon -Rv /var/www/site/uploads
If Nginx is making outbound HTTP calls (proxy_pass to a remote URL, OAuth callbacks):
sudo setsebool -P httpd_can_network_connect 1
Don't setenforce 0 to "fix" 403s in production. Use targeted booleans and contexts.
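When no stock boolean or context fits, the conventional last resort is a small local policy module generated from the observed denials. A sketch to run on the affected SELinux host (the module name `nginxlocal` is an arbitrary choice):

```shell
# Review what the module would allow BEFORE loading anything:
sudo ausearch -m AVC -ts recent | audit2allow

# Generate and install a module covering exactly those denials:
sudo ausearch -m AVC -ts recent | audit2allow -M nginxlocal
sudo semodule -i nginxlocal.pp
```

Read the generated rules first; blindly loading whatever audit2allow emits can whitelist a genuine problem.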
AppArmor
Most Nginx packages on Ubuntu do not ship a confined AppArmor profile by default, so SELinux-style 403s are rare here. If yours does (or you have a custom profile), check denials in dmesg:
sudo dmesg | grep -i 'apparmor.*DENIED' | tail
The fix is to update the profile (/etc/apparmor.d/usr.sbin.nginx) or disable confinement for that binary (sudo aa-disable /etc/apparmor.d/usr.sbin.nginx) and reload.
Step 5 — allow / deny and auth_basic
Nginx returns 403 outright when an allow/deny rule matches the client. Note that auth_basic failures (missing or wrong credentials) produce 401, not 403; a 403 on a protected path points at a deny rule or IP allowlist rejecting the client, not at the credentials.
Find any deny, allow, auth_basic, or internal directives in your config:
sudo nginx -T 2>/dev/null | grep -nE 'deny|allow|auth_basic|internal' | head -30
nginx -T dumps the fully resolved configuration including all included files — much faster than tracing include directives by hand.
Common patterns and what they do
location /admin/ {
allow 10.0.0.0/8;
deny all; # everyone except 10.0.0.0/8 → 403
}
location /private/ {
auth_basic "restricted";
auth_basic_user_file /etc/nginx/.htpasswd;
}
location ~ /\. {
deny all; # block dotfiles → 403 (this is what blocks .git, .env)
}
location /internal-only/ {
internal; # only reachable via internal redirect → 403 to clients
}
If nginx -T shows a deny matching the request and the client IP, that is your 403. Either the rule is correct (the request really is unauthorised — fix the client) or the rule is wrong (relax or scope it).
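One related pattern worth knowing: with `satisfy any;` a client passes if either the IP allowlist or basic auth succeeds, instead of every off-network client getting a hard 403:

```nginx
location /admin/ {
    satisfy any;                 # either condition below is sufficient
    allow 10.0.0.0/8;            # office network: no password prompt
    deny  all;                   # everyone else must authenticate
    auth_basic           "restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}
```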
Step 6 — PHP-FPM, FastCGI, and other upstreams
When Nginx proxies to PHP-FPM (or any upstream — Node, Python, Go, another Nginx) the 403 may come from the upstream, not Nginx itself. The error log distinguishes them:
- Nginx 403 — an `[error] ... 403` originating in the location/server block.
- Upstream 403 — Nginx forwards what the upstream returned. The error log is usually quiet; the access log shows the 403.
- Socket failure — Nginx cannot even reach the upstream: `connect() to unix:/run/php/php8.2-fpm.sock failed (13: Permission denied)`.
Fix (PHP-FPM socket permissions)
PHP-FPM creates the socket with the owner, group, and mode set in its pool configuration; Nginx's worker user must be allowed to read and write it. In /etc/php/8.x/fpm/pool.d/www.conf:
listen = /run/php/php8.2-fpm.sock
listen.owner = www-data
listen.group = www-data
listen.mode = 0660
Restart PHP-FPM (sudo systemctl restart php8.2-fpm) and confirm:
ls -l /run/php/php8.2-fpm.sock
# srw-rw---- 1 www-data www-data 0 May 9 09:22 /run/php/php8.2-fpm.sock
Fix (upstream returning 403)
If the upstream (PHP-FPM script, app server) is the one returning 403, Nginx is innocent — debug the application. Useful starting points:
- Check the application's own log, not just Nginx's. WordPress writes to `wp-content/debug.log` if `WP_DEBUG` is on; Laravel to `storage/logs/laravel.log`; Django to wherever `LOGGING` is pointed.
- Reproduce the request directly against the upstream, bypassing Nginx, to confirm the source: `curl --unix-socket /run/php/php-fpm.sock http://localhost/...` (you usually need a small helper instead, because PHP-FPM speaks FastCGI and Nginx normally builds that envelope for you).
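One such helper is `cgi-fcgi` (on Debian/Ubuntu it is commonly packaged as `libfcgi-bin`; verify the package name on your distribution). A sketch with an assumed socket path and script, run on the host itself:

```shell
# Speak FastCGI to the pool directly. If this also answers 403 or
# "Access denied", the problem is in PHP-FPM or the script, not in Nginx.
SCRIPT_FILENAME=/var/www/site/index.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /run/php/php8.2-fpm.sock
```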
Step 7 — Symlinks and disable_symlinks
If you serve files from a tree that contains symlinks (typical for atomic deploys: current/ → releases/2026-05-09/), Nginx may refuse to follow them:
disable_symlinks if_not_owner from=$document_root;
Error log signature:
[error] ... "/var/www/site/current/index.html" is forbidden (symlink ... is not in same filesystem ... or its owner ...)
Fix
Three options, in order of preference:
- Make ownership consistent — every symlink target should be owned by the same user as the link itself. For atomic deploys, this usually means the deploy user owns both `current` and `releases/*`.
- Loosen the directive scope — `disable_symlinks if_not_owner` is a sensible default; don't go to `disable_symlinks off` site-wide just to fix one path. Confine the change to the `location` block that needs it.
- Use a `root` that points at the resolved target — e.g. point `root` at `/var/www/site/releases/2026-05-09` (rendered by your deploy script) instead of at `current/`. This sidesteps the symlink check entirely, at the cost of one extra step in the deploy.
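Since `if_not_owner` compares the owner of the link with the owner of its target, plain `ls` can confirm a mismatch: `ls -ld` on the link shows the link's own owner, and `ls -ldL` dereferences to the target. A scratch-directory sketch of the deploy layout:

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/releases/2026-05-09"
ln -s "releases/2026-05-09" "$tmp/current"

ls -ld  "$tmp/current"    # owner of the symlink itself
ls -ldL "$tmp/current"    # owner of the target it points to

# disable_symlinks if_not_owner fires when those two owners differ.
# chown -h <user> <link> changes the link's owner without touching the target.
rm -rf "$tmp"
```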
Operational tips
- Always start with `nginx -t` and `nginx -T`. `-t` validates the config; `-T` prints the fully merged config including all `include`s. Most "I already checked the config" 403 mysteries dissolve in front of `nginx -T | less`.
- Turn off `try_files` on directories you mean to handle differently. A common 403 trap: `try_files $uri $uri/ =404;` will fall through to `$uri/`, which then trips the directory-without-index 403 in step 3. Use `try_files $uri =404;` if you don't want directory fallbacks.
- Be careful with `location` order. Nginx picks the most specific match by precedence rules, not by file order. A misplaced `location ~ \.(php|html)$` block can silently take over a request you thought was being served from a more general block — and bring with it a different `deny` set.
- Trailing slashes matter. `/foo` and `/foo/` can hit different `location` blocks. If 403 appears only for one of them, that is your clue.
- Log the request ID. Add `$request_id` to the access log format and surface it on a custom 403 page (or in headers). When users report a 403, you can find the exact log line in seconds.
- Containers re-introduce permission bugs. Inside a container the user IDs may not match the host. A volume mount can land in the container as `nobody:nogroup` even though it looks fine on the host. Always `ls -ln` (numeric IDs) inside the container, not on the host.
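The request-ID tip from the list above looks like this in config form (the format name `with_id` and the response header name are illustrative choices):

```nginx
http {
    log_format with_id '$remote_addr - $remote_user [$time_local] '
                       '"$request" $status $body_bytes_sent $request_id';
    access_log /var/log/nginx/access.log with_id;

    server {
        # Surface the ID to clients so a user report can quote it.
        add_header X-Request-Id $request_id always;
    }
}
```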
Catch 403s before users do
Hitting a 403 in your own browser is the lucky case. The painful one is finding out from a customer that an entire directory has been returning 403 for the last six hours because a deploy script changed an owner.
Configure HTTP monitoring to assert a 200 status on the URLs that matter, not just "the site responds":
- For each high-value path (`/`, `/login`, `/api/health`, key product pages), set up a check that fails on any non-2xx response — not just on connection timeouts.
- Combine with keyword matching so a soft-403 (the page renders but contains "Access Denied" or "Permission denied") is also caught.
- For protected paths that should return 401 to anonymous traffic, configure the check to expect 401 — not 200 and not "any 4xx is fine".
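Until that is in place, even a cron-able one-liner catches the blunt cases. A sketch; the URL and expected status are placeholders:

```shell
url="https://example.com/api/health"   # placeholder: one of your own URLs
want=200

# -w '%{http_code}' makes curl print only the status; the body is discarded.
code=$(curl -s -o /dev/null -w '%{http_code}' "$url" || true)
if [ "$code" != "$want" ]; then
    echo "ALERT: $url returned HTTP ${code:-000} (expected $want)"
fi
```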
Xitoring's website monitoring checks every URL from multiple regions, asserts the expected status code and response keyword, and alerts on the first failure — not the third in a row. Pair it with the Nginx integration on the host itself so you see the 403 spike in your dashboard and the underlying nginx process metrics on the same timeline. The combination usually answers "is this the app, the web server, or the network?" before the on-call engineer has finished logging in.
Summary
When Nginx returns 403 Forbidden, work through this order:
- Read `/var/log/nginx/error.log`. The exact phrase in the log line names the cause 90% of the time.
- Permissions. Run `sudo -u <worker_user> namei -l /path/to/file` and confirm every directory has the search bit for the worker user. Fix with `chmod g+rX` plus correct group ownership.
- Missing index / autoindex. If the URL ends in a directory, ensure an index file exists or set `autoindex on;`.
- SELinux / AppArmor. `ausearch -m AVC -ts recent | audit2why` on the RHEL family; `dmesg | grep apparmor` on Ubuntu. Fix with `semanage fcontext` + `restorecon`, not `setenforce 0`.
- Explicit `deny` / `auth_basic` / `internal`. `nginx -T | grep -E 'deny|allow|auth_basic|internal'` to find the matching directive.
- Upstream (PHP-FPM, app server). Check the upstream's own log; verify socket ownership on `/run/php/php-fpm.sock`.
- Symlinks. Look for `(symlink ...)` in the error log; align ownership or scope `disable_symlinks` more narrowly.
Treat the 403 as data, not as a single symptom. Each cause has a distinct error log signature, and the seven-step walk above takes most engineers under ten minutes once they have done it twice. Wire HTTP status checks into your monitoring so the next 403 outbreak is caught before a user has to file the ticket.