I Tried to Put My Project Online for the First Time — Here's Every Error I Got
What did you imagine deploying your first project would feel like? For me, it was one clean command. A progress bar. A green checkmark. A live URL I could paste into a browser and say, "Look, it's real."
The reality: two days of terminal errors, configuration fixes, and learning that a production server is not just "my laptop, but online."
The project was a standard full-stack setup: a Next.js frontend, an Express backend, and a PostgreSQL database. I had spent three weeks building it locally, and by that point it felt stable. So when a friend asked if he could try it, I thought putting it online would be straightforward.
It was not.
This is the actual sequence of errors I hit, what each one meant, and what finally fixed them.
What I Was Trying to Deploy
The stack itself was not unusual. Next.js handled the frontend. Express handled the API. PostgreSQL stored the data.
I rented a basic Ubuntu VPS, pushed the code, and started setting things up. I had used a VPS before for a static site, so I assumed I already understood the hard part.
That assumption was wrong.
Most of the trouble came from one thing I did not understand clearly enough at the start: the server knows nothing about my local environment. It does not have my SSH keys, my installed packages, my database, my environment variables, or any of the invisible setup that made the app work on my machine.
That gap between building something and making it production-ready is easy to underestimate.
Error 1: Permission denied (publickey)
The first error showed up before I had even properly started. I tried to git clone my repo onto the server and GitHub refused entirely:
Permission denied (publickey)
My first thought was that something was wrong with GitHub access. It was not.
The actual issue was simpler: my laptop had an SSH key configured, but the server did not. As far as GitHub was concerned, this was a new machine with no authentication.
The fix was to generate a new SSH key on the server:
ssh-keygen -t ed25519 -C "you@email.com"
cat ~/.ssh/id_ed25519.pub
Then I copied that public key into GitHub's SSH settings, started the SSH agent, and added the key.
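Put together, a non-interactive version of that sequence looks like this (the email address and the empty passphrase are placeholders; use your own values):

```shell
# Generate a key on the server; -N "" sets an empty passphrase so the
# command runs without prompting (use a real passphrase if you prefer).
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
ssh-keygen -t ed25519 -N "" -f "$HOME/.ssh/id_ed25519" -C "you@email.com"

# Start the agent in this shell session and load the key.
eval "$(ssh-agent -s)"
ssh-add "$HOME/.ssh/id_ed25519"

# Print the public half; this is what gets pasted into GitHub's SSH settings.
cat "$HOME/.ssh/id_ed25519.pub"
```

After pasting the key into GitHub, `ssh -T git@github.com` is a quick way to confirm authentication works before retrying the clone.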
The first useful lesson: The server is a separate machine with a separate identity. Nothing from local development carries over automatically.
Error 2: Cannot GET /
Once I got the code onto the VPS and started the backend, I typed the server IP into the browser and got:
Cannot GET /
At first, I treated it like a framework problem. But it was really a deployment setup problem. The app was not yet reachable the way I assumed it would be. My Express server was still bound too narrowly, and the port was not open for outside traffic.
The fix required two changes:
// server.js — bind to all interfaces, not just localhost
app.listen(5000, '0.0.0.0', () => console.log('Running'));
# open the port in UFW
sudo ufw allow 5000
Binding to 0.0.0.0 lets Express accept external connections. Opening the port in UFW lets those requests actually reach the server. In my case, I needed both.
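One check that helps tell these failure modes apart (port 5000 assumed from the setup above):

```shell
# Does the app answer locally, from the server itself? If this succeeds
# but the browser from outside fails, the process is healthy and the
# problem is the bind address or the firewall.
curl -sS http://127.0.0.1:5000/ || echo "nothing answering on 5000"
```

Checking the bind address directly (for example with `ss -tln`, where `127.0.0.1:5000` means localhost only and `0.0.0.0:5000` means all interfaces) answers the other half of the question.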
Error 3: ECONNREFUSED 127.0.0.1:5432
With Express responding, the next thing to break was the database connection:
ECONNREFUSED 127.0.0.1:5432
At first glance, it looked like PostgreSQL was down. The real problem was simpler: my app was still using local database assumptions, but PostgreSQL had not actually been installed and configured on the VPS yet.
My .env file still had values that made sense on my laptop:
DB_HOST=localhost
DB_PORT=5432
DB_USER=postgres
DB_PASSWORD=postgres
DB_NAME=myapp
On my machine, that pointed to a working local Postgres instance. On the server, it pointed to nothing.
The fix was to install PostgreSQL on the VPS, create the database and user, and update the environment to match:
CREATE DATABASE myapp;
CREATE USER myapp WITH PASSWORD 'yourpassword';
GRANT ALL PRIVILEGES ON DATABASE myapp TO myapp;
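With that in place, the server-side .env stops describing my laptop and starts describing the VPS. A sketch of what it looks like (placeholder values; match whatever user, password, and database you actually created):

```shell
# .env on the VPS -- placeholders, not real credentials
DB_HOST=localhost        # PostgreSQL now runs on the VPS itself
DB_PORT=5432
DB_USER=myapp
DB_PASSWORD=yourpassword
DB_NAME=myapp
```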
The question that would have saved time: For every environment variable, ask — does this value assume I am still on my own machine?
Error 4: No Error, Just Missing Data
This one took the longest to understand because the app looked almost correct.
The page loaded. The UI rendered. But the actual data was missing.
There was no dramatic crash, just empty screens. In the browser's network tab, requests were failing quietly, all of them pointing at http://localhost:5000.
The issue was in my Next.js frontend. I had hardcoded the API URL as localhost:5000 during development. Server-side requests worked fine because Express actually was running on that machine's localhost. But client-side requests run in the visitor's browser — and in the browser, localhost:5000 means the visitor's own machine, not my VPS.
The fix was to separate the API configuration properly by environment:
# .env.production
NEXT_PUBLIC_API_URL=https://yourdomain.com/api
API_URL=http://localhost:5000
// frontend
const API_URL = process.env.NEXT_PUBLIC_API_URL;
Key Next.js rule: Only variables prefixed with NEXT_PUBLIC_ are sent to the browser; everything else stays server-side. These values are also inlined at build time, so changing them means rebuilding the frontend, not just restarting it. Once I understood that split, this whole class of errors made complete sense.
Error 5: 502 Bad Gateway
I had set up Nginx as a reverse proxy: it listens on port 80 and passes incoming requests to Next.js on port 3000 or Express on port 5000, depending on the route. It worked.
Then I rebooted the server.
After that:
502 Bad Gateway
A 502 means Nginx is running but cannot get a valid response from the app behind it.
Problem 1: Node processes weren't surviving a reboot. The server came back online, but my apps did not. So I switched to PM2 for process management:
npm install -g pm2
pm2 start npm --name "frontend" -- start
pm2 start server.js --name "backend"
pm2 save
pm2 startup
The important parts were pm2 save and pm2 startup. They make sure both processes come back automatically after a reboot.
Problem 2: A typo in my Nginx config. Even after adding PM2, the 502 kept coming back. The real reason was proxy_pass http://localhost:300 instead of http://localhost:3000. One missing digit. Nginx was sending traffic to a port that did not exist.
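For reference, a sketch of the relevant Nginx config (routes and ports assumed from the setup described above; your paths may differ):

```nginx
server {
    listen 80;

    # API routes go to Express
    location /api/ {
        proxy_pass http://localhost:5000;
    }

    # Everything else goes to Next.js -- this is the line where
    # "localhost:300" hid for far too long
    location / {
        proxy_pass http://localhost:3000;
    }
}
```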
sudo nginx -t
That command checks your config and tells you exactly what is wrong before you reload. It would have caught the typo immediately. I found out about it too late.
When It Finally Loaded
I added the missing digit, ran sudo nginx -t, reloaded Nginx, and typed the IP into my browser.
It loaded.
I sat there for a moment. Then I sent the link to my friend. Then I sent it to two other people who had not asked for it.
The app was nothing impressive. But I understood every layer of how it was actually running in a way I had not 48 hours before. That felt like something.
What Was Actually Going Wrong the Whole Time
Looking back, almost every error came from the same root cause: I was treating the server like it was my laptop.
My laptop had the repo access configured. It had PostgreSQL running. It had the correct .env values. It had all the assumptions I had built over three weeks of local development.
The VPS had none of that.
Once I started seeing deployment as the process of recreating the right environment from scratch, the errors stopped feeling random. They started making sense.
That shift was the real turning point. Not one magic command. Not one hidden fix. Just a better model of what deployment actually is.
What I Would Do First Next Time
Before deploying anything similar again, I would go through the environment file first. For every variable, I would ask:
Does this point to something on my local machine?
What should this value be on the server?
Is this meant for the backend, the frontend, or both?
Does this assume a service already exists?
Then I would map the full request flow clearly before touching anything:
Browser → Nginx → Next.js / Express → PostgreSQL
That sounds basic, but doing it upfront would have prevented most of the confusion.
I would also stop relying on manual process starts earlier. PM2 should be part of the setup from the beginning, not something added after the first reboot breaks everything.
What I Understand Now
Deployment does not fail because production is mysterious. It fails because local development hides too much.
On your own machine, a lot is already working quietly in the background. Your environment variables are there. Your database is running. Your ports are familiar. Your keys are configured.
A fresh server has none of that.
The code moves over, but the environment that made the code work locally does not.
Once I stopped asking, "Why is this breaking?" and started asking, "What did my local machine make easy that this server does not have yet?" — the whole process became easier to understand.
That gap between building something and making it production-ready is easy to underestimate. It is also one of the most important transitions a developer can learn to navigate.