Run “Real” Databases Locally Without Admin Rights
The One Pattern That Solves 90% of “No Admin Rights” Database Problems
PostgreSQL first — then the same repeatable pattern for MySQL/MariaDB, MongoDB, Redis, SQLite, and DuckDB (all in one article).
You’re on a machine where sudo is treated like plutonium. Or on a Windows machine where you have no admin rights.
Maybe it’s a corporate laptop with “security.”
Maybe it’s a university server where the admins have seen too much.
Maybe it’s an HPC cluster where your job is to compute things, not negotiate privileges.
You still want a database.
Not a pretend one. A real one: localhost, users, passwords, migrations, logs, restartable, disposable, reproducible.
This is the guide I wish I had: one article, one mental model, multiple databases.
The One Pattern That Solves 90% of “No Admin Rights” Database Problems
Most database install tutorials assume:
- you can install system packages (and thus have sudo/admin rights on your machine),
- you can write to /etc and /var/lib,
- you can start a system service.
We’ll do none of that and assume you have no sudo/admin rights on your machine.
Instead, we’ll use the “project-contained DB” pattern:
- Create an isolated environment (Conda here, but any isolation works)
- Install DB binaries inside it
- Create a local data directory (inside your project/home)
- Initialize the database (if it needs it)
- Start the server bound to 127.0.0.1 (log to a file)
- Create a least-privilege user + app database
- Connect from your app
- Stop cleanly
Once you understand this, every database becomes a variation of the same recipe.
A Clean Local Folder Layout (recommended)
Keep your database files away from your code, but still inside your project.
mkdir -p db/{pgdata,logs}

Later we’ll add other DBs here:
- db/pgdata (Postgres data)
- db/mysqldata (MariaDB/MySQL data)
- db/mongodata (MongoDB data)
- db/redisdata (Redis persistence, optional)
- db/app.sqlite3 (SQLite file)
- db/analytics.duckdb (DuckDB file)
- db/logs/* (logs for everything)
Your whole DB universe becomes a folder you can zip, delete, move, or reproduce.
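If you prefer scripting the layout instead of typing `mkdir -p`, here is a minimal Python sketch of the same structure (the directory names simply mirror the list above):

```python
from pathlib import Path

# Server databases get data directories; file-based DBs (SQLite/DuckDB)
# are just files that will be created on first use.
for sub in ["pgdata", "mysqldata", "mongodata", "redisdata", "logs"]:
    Path("db", sub).mkdir(parents=True, exist_ok=True)

print(sorted(p.name for p in Path("db").iterdir()))
```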
Part 1 — PostgreSQL: a full local server, no sudo/admin rights
0) Install Conda
Follow the official Miniconda installation instructions.
On macOS/Linux:
mkdir -p ~/miniconda3
# pick the installer matching your OS/architecture; this one is for Apple Silicon macOS
curl https://repo.anaconda.com/miniconda/Miniconda3-latest-MacOSX-arm64.sh -o ~/miniconda3/miniconda.sh
bash ~/miniconda3/miniconda.sh -b -u -p ~/miniconda3
rm ~/miniconda3/miniconda.sh
# activate and initialize conda:
source ~/miniconda3/bin/activate
conda init --all # or leave away --all if only for your current account

In Windows PowerShell, first install Scoop:
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser # install scoop first
Invoke-RestMethod -Uri https://get.scoop.sh | Invoke-Expression

Scoop is worth having for all your future installations as a developer on Windows: it makes installing tools feel much more like it does on Linux.
Alternatively: you could enable WSL2 (Windows Subsystem for Linux 2), install Ubuntu into it, and then follow the Linux instructions above inside the Ubuntu shell.
Using scoop, you can now install miniconda like this:
scoop bucket add extras # the 'extras' bucket is needed for miniconda
scoop install miniconda

After installing Miniconda, initialize conda:

conda init --all # or leave away --all if only for your current account

Now, with conda installed (Miniconda in this case, not the full Anaconda), you can follow the rest of the guide.
1) Create and activate a Conda environment
conda create -n myenv -y
conda activate myenv

This is your private sandbox. Anything installed here lives in your user space.
2) Install PostgreSQL into the environment
conda install -y -c conda-forge postgresql

This gives you: postgres, initdb, pg_ctl, psql, createdb, createuser, etc.
3) Initialize a local Postgres “cluster”
initdb -D db/pgdata

What this does: creates internal system catalogs and configuration under db/pgdata.
This is not your app database yet. It’s Postgres building its internal engine room.
4) Start Postgres and log to a file
pg_ctl -D db/pgdata -l db/logs/postgres.log start

You now have Postgres running as your user, bound to localhost by default.
5) Create a non-superuser (recommended)
createuser --encrypted --pwprompt mynonsuperuser

This is the user your app will use.
Keeping it non-superuser is a simple habit that prevents “why can my web app drop everything?” moments.
6) Create the actual application database
createdb --owner=mynonsuperuser myinner_db

Now you have:
- a running Postgres server
- a normal user with a password
- a database owned by that user
That’s the real setup.
7) Connect it to Django (example)
In settings.py:
DATABASES = {
"default": {
"ENGINE": "django.db.backends.postgresql",
"NAME": "myinner_db",
"USER": "mynonsuperuser",
"PASSWORD": "<mynonsuperuserpassword>",
"HOST": "localhost",
"PORT": "", # default
}
}

Install the driver:
conda install -y -c anaconda psycopg2

Run migrations:
python manage.py migrate
python manage.py createsuperuser --username name

8) Stop Postgres cleanly (the civilized way)
pg_ctl -D db/pgdata stop

Yes, you can kill PIDs. But pg_ctl stop prevents half-written data and future regret.
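Before wiring credentials into a framework, it can help to sanity-check them as a plain connection string. A minimal sketch, reusing the user and database created above (`pg_dsn` is a hypothetical helper written here, not part of any library):

```python
def pg_dsn(dbname, user, password, host="localhost", port=5432):
    """Build a libpq-style keyword/value connection string."""
    return (
        f"dbname={dbname} user={user} password={password} "
        f"host={host} port={port}"
    )

dsn = pg_dsn("myinner_db", "mynonsuperuser", "secret")
print(dsn)

# With the server from step 4 running and psycopg2 installed,
# this string would be accepted directly:
#   import psycopg2
#   conn = psycopg2.connect(dsn)
```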
Part 2 — Same pattern, different database dialects (copy/paste recipes)
The goal now is not to memorize flags.
The goal is to recognize the pattern:
- install → data dir → init → start → user/db → connect → stop
Below are “minimal viable” local commands that work in most environments. Some platforms vary (especially for MongoDB packaging), but the structure stays the same.
A) MariaDB / MySQL (the other major relational DB)
Why it belongs: if Postgres is the adult in the room, MariaDB/MySQL is the other adult who already lives in half the world’s web stacks.
Install
conda install -y -c conda-forge mariadb

Create directories:

mkdir -p db/mysqldata db/logs

Initialize
Depending on the package/version, one of these is available:
mariadb-install-db --datadir=db/mysqldata

or

mysql_install_db --datadir=db/mysqldata

Start (bind locally, write socket locally)
mariadbd --datadir=db/mysqldata \
--socket=db/mysqldata/mysql.sock \
--port=3306 \
--bind-address=127.0.0.1 \
--log-error=db/logs/mariadb.log &

Create user + database
Connect via the socket:
mariadb --socket=db/mysqldata/mysql.sock

Then run:
CREATE DATABASE myinner_db;
CREATE USER 'mynonsuperuser'@'localhost' IDENTIFIED BY 'yourpassword';
GRANT ALL PRIVILEGES ON myinner_db.* TO 'mynonsuperuser'@'localhost';
FLUSH PRIVILEGES;

Stop
If your package includes it:
mysqladmin --socket=db/mysqldata/mysql.sock shutdown

Otherwise, stop by PID (last resort).
B) MongoDB (document database)
Why it belongs: it’s still the best-known document DB, and a lot of people need it for local development even if production is managed.
Install (two realistic paths)
- Best case: conda-forge has a compatible build on your platform.
- Otherwise: download the MongoDB tarball into your home directory.
If conda works:
conda install -y -c conda-forge mongodb

Create directories:

mkdir -p db/mongodata db/logs

Start
mongod --dbpath db/mongodata \
--logpath db/logs/mongo.log \
--bind_ip 127.0.0.1 \
--port 27017 &Create user + database (simple dev setup)
mongosh

Then:
use myinner_db
db.createUser({
user: "mynonsuperuser",
pwd: "yourpassword",
roles: [{ role: "readWrite", db: "myinner_db" }]
})

Stop
In mongosh:
use admin
db.shutdownServer()

C) Redis (cache / key-value store)
Why it belongs: Redis is the quiet workhorse behind queues, caching, sessions, rate limiting, and a lot of “why is production fast but dev slow?” stories.
Install
conda install -y -c conda-forge redis

Create directories:

mkdir -p db/redisdata db/logs

Start (local-only)
redis-server --bind 127.0.0.1 \
--port 6379 \
--dir ./db/redisdata \
--logfile ./db/logs/redis.log &

Stop

redis-cli shutdown

D) SQLite (relational, zero-server)
Why it belongs: it’s the best database when you don’t need a server. Many projects should start here.
There is no init. No start. No stop.
Install (if needed)
conda install -y -c conda-forge sqlite

Create / open the DB file
sqlite3 db/app.sqlite3

Then you can do:

CREATE TABLE test (id INTEGER PRIMARY KEY, name TEXT);

SQLite is just a file. That’s the feature.
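Because SQLite ships with Python’s standard library, you can use the same file from code with no extra install. A minimal sketch, assuming the db/ layout from earlier:

```python
import sqlite3
from pathlib import Path

Path("db").mkdir(exist_ok=True)  # the project layout from earlier
con = sqlite3.connect("db/app.sqlite3")
con.execute("CREATE TABLE IF NOT EXISTS test (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("INSERT INTO test (name) VALUES (?)", ("alice",))
con.commit()
rows = con.execute("SELECT name FROM test").fetchall()
print(rows)
con.close()
```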
E) DuckDB (analytics database, insanely practical)
Why it belongs: it’s the “data science local powerhouse.” If you work with CSV/Parquet/DataFrames, this saves hours.
No server. Usually embedded in your Python/R process.
Install
conda install -y -c conda-forge duckdb

Use it from Python
import duckdb
con = duckdb.connect("db/analytics.duckdb")
con.execute("CREATE TABLE t AS SELECT 1 AS x")
print(con.execute("SELECT * FROM t").fetchall())

Use it from R
install.packages(c("DBI", "duckdb"))
library(DBI)
library(duckdb)
# Persistent database file (lives in your project)
con <- dbConnect(duckdb::duckdb(), dbdir = "db/analytics.duckdb")
# Create / query
dbExecute(con, "CREATE TABLE IF NOT EXISTS t (x INTEGER)")
dbExecute(con, "INSERT INTO t VALUES (1), (2), (3)")
dbGetQuery(con, "SELECT * FROM t WHERE x >= 2")
# Always disconnect (and shut down the embedded engine cleanly)
dbDisconnect(con, shutdown = TRUE)

Or, using dplyr:
install.packages(c("DBI", "duckdb", "dplyr"))
library(DBI)
library(duckdb)
library(dplyr)
con <- dbConnect(duckdb::duckdb(), dbdir = "db/analytics.duckdb")
# Treat a file like a table, then dplyr it
res <- tbl(con, sql("SELECT * FROM read_csv_auto('data/sales.csv')")) %>%
filter(amount > 0) %>%
group_by(customer_id) %>%
summarise(total = sum(amount), .groups = "drop") %>%
arrange(desc(total)) %>%
head(10) %>%
collect()
print(res)
dbDisconnect(con, shutdown = TRUE)

That’s it. You now have a persistent analytics DB file.
Part 3 — What’s actually happening (the “competent friend” explanation)
You just ran a lot of commands. Here’s the mental model that makes it all feel obvious.
1) “Install” does not mean “install into the OS”
With Conda (or any isolated env), binaries go into a directory you own.
That’s why you don’t need admin rights.
2) The data directory is the database
For server databases (Postgres, MariaDB, MongoDB), the data directory contains:
- system tables / metadata
- your user databases
- logs (if configured there)
- internal state
If you delete it, you reset everything.
That’s brutal and also extremely convenient.
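That “delete to reset” workflow can be scripted. A minimal sketch (`reset_data_dir` is a hypothetical helper, pointed at a scratch directory here so nothing real is harmed):

```python
import shutil
from pathlib import Path

def reset_data_dir(path):
    """Delete a database's data directory and recreate it empty.

    This erases EVERY database stored in that directory -- brutal,
    but exactly what makes local disposable setups convenient.
    """
    p = Path(path)
    if p.exists():
        shutil.rmtree(p)
    p.mkdir(parents=True)
    return p

d = reset_data_dir("db/pgdata-scratch")  # scratch name, NOT your real db/pgdata
print(d, list(d.iterdir()))
```

After a reset you would re-run the init step (initdb, mariadb-install-db, etc.) before starting the server again.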
3) “Initialize” means “create the internal engine room”
- Postgres: initdb
- MariaDB/MySQL: mariadb-install-db / mysql_install_db
- MongoDB: no explicit init; it creates files on first start
- Redis: no init; optional persistence
- SQLite/DuckDB: file-based; “init” is creating/opening the file
4) “Start the server” means “run a normal process”
No system daemon required.
- pg_ctl start starts Postgres
- mariadbd starts the MySQL-compatible server
- mongod starts MongoDB
- redis-server starts Redis
5) “Bind to localhost” is your safety belt
127.0.0.1 means: only the local machine can connect.
This is the right default for local development, especially on shared machines.
6) Least-privilege users are not bureaucracy
They prevent accidents.
They also force you into the same shape you’ll want in production.
Troubleshooting Corner (the 5 classic problems)
1) “Port already in use”
Something is already listening on that port.
Fix: choose another port (e.g. 5433, 3307, 27018, 6380) or stop the other process.
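A quick way to check which ports are actually available is to try binding them yourself: if the bind succeeds, nothing is listening there. A minimal stdlib sketch (`port_is_free` is a hypothetical helper, not from any library):

```python
import socket

def port_is_free(port, host="127.0.0.1"):
    """Return True if nothing is listening on host:port (bind succeeds)."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Scan the usual suspects plus their fallbacks
for port in [5432, 5433, 3306, 3307, 27017, 27018, 6379, 6380]:
    print(port, "free" if port_is_free(port) else "in use")
```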
2) “Permission denied” inside your data directory
Your data folder is somewhere you can’t write.
Fix: put it in your home folder or project folder you own.
3) “Authentication failed”
User/password mismatch or default auth method mismatch.
Fix: recreate the user, or verify the auth config (Postgres: pg_hba.conf inside db/pgdata).
4) “It worked yesterday, now it doesn’t”
Most often: leftover stale lock files or a server process still running.
Fix: stop cleanly; otherwise find the process and kill it, then restart.
5) “My app can’t connect”
Check the simple things:
- correct host (localhost)
- correct port
- correct database name
- correct credentials
- server actually running
Logs in db/logs/* are your best friend.
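The checklist above can be partly automated: before blaming credentials, confirm that a server is accepting TCP connections at all. A minimal sketch (`server_reachable` is a hypothetical helper using only the stdlib):

```python
import socket

def server_reachable(host, port, timeout=1.0):
    """True if something accepts TCP connections at host:port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# If this prints False, the server isn't running (or is on another port);
# if it prints True, the problem is name/credentials/auth config instead.
print("postgres:", server_reachable("localhost", 5432))
```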
How to choose which local DB to run
- PostgreSQL: you want a serious relational DB and production-like behavior.
- MariaDB/MySQL: you need compatibility with MySQL-oriented stacks.
- MongoDB: you want document-oriented storage and flexible schemas.
- Redis: you need cache/queue/session store behavior.
- SQLite: you want SQL with zero running services.
- DuckDB: you want local analytics over files/dataframes.
The Real Win: Databases Become Project Dependencies
This approach is not only a workaround for locked-down machines. It’s a better workflow in general:
- Everything is reproducible
- Everything is contained
- Reset means deleting db/
- Local dev is closer to production
- You stop negotiating permissions and start building
You also get a portable teaching tool: one pattern, multiple databases, fewer hours wasted.
A Small Start/Stop Script (for Those on Unix Machines or Windows WSL2 Ubuntu)
Create dbctl.sh in your project root:
#!/usr/bin/env bash
set -euo pipefail
cmd="${1:-help}"
case "$cmd" in
pg-init)
initdb -D db/pgdata
;;
pg-start)
pg_ctl -D db/pgdata -l db/logs/postgres.log start
;;
pg-stop)
pg_ctl -D db/pgdata stop
;;
redis-start)
redis-server --bind 127.0.0.1 --port 6379 \
--dir ./db/redisdata --logfile ./db/logs/redis.log &
;;
redis-stop)
redis-cli shutdown
;;
mongo-start)
mongod --dbpath db/mongodata --logpath db/logs/mongo.log \
--bind_ip 127.0.0.1 --port 27017 &
;;
mongo-stop)
mongosh --eval 'db.getSiblingDB("admin").shutdownServer()'
;;
*)
echo "Usage:"
echo " ./dbctl.sh pg-init|pg-start|pg-stop"
echo " ./dbctl.sh redis-start|redis-stop"
echo " ./dbctl.sh mongo-start|mongo-stop"
;;
esac

Make it executable:

chmod +x dbctl.sh

Now you can do:
./dbctl.sh pg-start
./dbctl.sh pg-stop

No remembering the exact pg_ctl incantations.