

That’s wishful thinking. Users don’t give a shit as long as the problem goes away without having to lift a finger.
They have a half-assed solution without a problem. The next logical step is to create a problem.
I’ve never used the AIO image. I’ve heard it’s weird. This is my compose file for the community image:
volumes:
  db:

services:
  db:
    image: mariadb:10.6
    restart: always
    command: --transaction-isolation=READ-COMMITTED --log-bin=binlog --binlog-format=ROW
    volumes:
      - db:/var/lib/mysql
    secrets:
      - mysql_root_password
      - mysql_nextcloud_password
    environment:
      - MYSQL_ROOT_PASSWORD_FILE=/run/secrets/mysql_root_password
      - MYSQL_PASSWORD_FILE=/run/secrets/mysql_nextcloud_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud

  nextcloud:
    image: nextcloud
    restart: always
    ports:
      - 8080:80
    depends_on:
      - db
    links:
      - db
    volumes:
      - /var/www/html:/var/www/html
      - /srv/data:/srv/data
    secrets:
      - mysql_nextcloud_password
    environment:
      - MYSQL_PASSWORD_FILE=/run/secrets/mysql_nextcloud_password
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
      - MYSQL_HOST=db

secrets:
  mysql_root_password:
    file: ./secrets/mysql_root_password.txt
  mysql_nextcloud_password:
    file: ./secrets/mysql_nextcloud_password.txt
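To bring it up, the two secret files referenced at the bottom of the compose file have to exist first. Something like this should do it (a sketch; openssl is just one way to generate random passwords):

# create the password files the compose file expects, then start the stack
mkdir -p secrets
openssl rand -base64 24 > secrets/mysql_root_password.txt
openssl rand -base64 24 > secrets/mysql_nextcloud_password.txt
docker compose up -d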
You can access it on port 8080 and perform the initial setup manually. For the database server address, use the db hostname. You’ll have to use a reverse proxy for HTTPS.
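For the reverse proxy, a Caddyfile is about the shortest option I know of (a sketch, not what I run; nextcloud.example.com is a placeholder domain, and Caddy fetches the Let’s Encrypt certificate on its own):

# Caddyfile: terminate HTTPS and forward to the container on port 8080
nextcloud.example.com {
    reverse_proxy localhost:8080
}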
You could also try OpenCloud, which is a Go rewrite of ownCloud.
No. I’m so bloody fed up with AI “search” solutions that return everything on the fucking planet except what I want. Text search has been a solved problem for a decade. All I want out of a search engine is for it to be deterministic, stable, and reliable, and to look in titles, descriptions, and keywords. Vibe processing is completely unnecessary and will only create issues.
If you really want to iNnoVAte, then consider creating an index with transcripts and summaries that users can search by keywords.
As long as you’re not behind CGNAT, you can use a dynamic DNS provider (like duckdns.org) and its web API to keep a record pointed at your IP. If you’re behind CGNAT, Tailscale also has a service (Tailscale Funnel) that can expose an internal service to the internet.
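With duckdns.org the update is a plain HTTP GET, so a cron job is enough to keep the record current (a sketch; YOURSUBDOMAIN and YOURTOKEN are placeholders from your DuckDNS account):

# crontab entry: re-point the DuckDNS record at the current public IP every 5 minutes
*/5 * * * * curl -fsS "https://www.duckdns.org/update?domains=YOURSUBDOMAIN&token=YOURTOKEN&ip=" >/dev/null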
You could also pay for a small VPS with a static IP, and set up a Wireguard tunnel to your home server and an HTTPS proxy to forward traffic through the tunnel.
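The VPS end of that tunnel is only a few lines of WireGuard config (a sketch with placeholder keys and addresses; the home server connects out to the VPS, and the HTTPS proxy on the VPS forwards to 10.0.0.2):

# /etc/wireguard/wg0.conf on the VPS
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <VPS_PRIVATE_KEY>

[Peer]
# the home server; on its side, set Endpoint to the VPS and a PersistentKeepalive to hold the NAT mapping open
PublicKey = <HOME_SERVER_PUBLIC_KEY>
AllowedIPs = 10.0.0.2/32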
Also, just in general, use Tailscale. It’s serious black magic fuckery on the firewall.
I currently run Nextcloud inside a Debian 11 LXC container on Proxmox, together with Apache, MariaDB, and PHP. I followed this guide. Once Apache and PHP were running, the rest of the process was straightforward.
Take a look at this list: https://networkupstools.org/stable-hcl.html
I use an older APC Back-UPS 500 to power my homelab and all network devices. So far it’s saved me from 3 power outages, and can last about 30 minutes with a 50W power draw. It doesn’t have data connections of its own (newer devices do), so I had to improvise with an ESP32 board that reports if it detects a voltage on the beeper, plus some cron jobs on Proxmox.
I simply use Nextcloud to sync the vault directory. It has clients for both desktop and mobile and works perfectly fine. I use it to sync basically everything between my work, home, laptop, and mobile.
The only drawback is that I don’t know whether Obsidian automatically reloads a file when it changes on disk; if not, and you leave the file open in the editor, you might accidentally overwrite the new file with old data.
“Archiving legally purchased content as an insurance against corporate-sanctioned theft”?
“I just simply set up a script to export my Trilium notes”
“edit the notes with an external editor, and then you can just re-import the note”
Those two lines right there.
I value interoperability between software. Using a container format to store plaintext files and metadata introduces an XKCD 927 situation where it’s just another reinvention of the wheel that requires additional software support or a whole other workflow for no real benefit. Why is it necessary, for example, to store plaintext data and the related hierarchical structure in a container format when the same feature is already present in the filesystem with files and directories? It adds unnecessary complexity, roadblocks, and points of failure.
I’m using QOwnNotes at the moment. If I want to edit a note, for example, using neovim through SSH, all I need to do is navigate to the markdown file and open it. No scripts, no export/import. Only text files, and that is all it ever needs to be.
I’m in the same position, and it feels so damn powerful. I’ve convinced an entire university to ditch Ubuntu in favor of Linux Mint, and I’m also advocating for replacing our aging VMware servers (with a soon-to-expire license) with Proxmox.
Damn, I had no idea netcat had a hardware implementation.
I haven’t tried, but you might be able to set up a samba share that points to /var/www/nextcloud-data/USER/files, just make sure that it uses the www-data user.
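Untested, but a share definition along these lines should do it (a sketch; USER and the path are placeholders for your install):

# /etc/samba/smb.conf
[nextcloud-files]
    path = /var/www/nextcloud-data/USER/files
    valid users = USER
    read only = no
    # write as www-data so Nextcloud keeps ownership of anything added over SMB
    force user = www-data
    force group = www-data

Keep in mind that Nextcloud won’t notice files added this way until occ files:scan is run for that user.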
At some point, you have to compromise.
Debian, all the way. I’ve got both Ubuntu (made by my predecessor) and Debian servers at work, and as far as maintenance and administration go, they’re more or less identical. The one thing that sometimes catches me off guard is that sudo is not installed by default, and you have to su - into a root session.
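If you want sudo on a fresh Debian install, it’s a one-time fix (alice is a placeholder username):

# become root, install sudo, and add your user to the sudo group
su -
apt install sudo
usermod -aG sudo alice
# log out and back in for the group membership to take effect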
Wireguard
You mean Wireshark? It’s possible. You might even capture the DHCP exchange.
The two best programs for the job are nmap and arp-scan.
Nmap is like ping on steroids. You can use it for network discovery, port scanning, fingerprinting, and basic pentesting. As long as the pi can talk to the computer, nmap will sniff it out.
ARP-scan works on the data link layer to identify hosts using ARP. It should be able to return the IP address of all ethernet devices, even if they end up in different subnets. It took me a little over two minutes to scan a /16 subnet with one retry and a 0.1-second timeout.
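For reference, these are the kind of invocations I mean (a sketch; 192.168.1.0/24 and eth0 are placeholders for your subnet and interface):

# ping-scan the subnet and list every host that answers, with MAC vendor info
sudo nmap -sn 192.168.1.0/24
# sweep the local segment at layer 2; --timeout is in milliseconds
sudo arp-scan --interface=eth0 --localnet --retry=1 --timeout=100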
If you are really concerned about the pi’s address, you should run a local DHCP server on the laptop: dnsmasq for Linux and Mac, but I have no idea what to use on Windows (other than a VM bridged to the ethernet interface).
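On Linux, that can be a single dnsmasq invocation (a sketch; eth0 and the 192.168.50.x range are placeholders):

# give the laptop's ethernet port an address, then hand out leases and log them in the foreground
sudo ip addr add 192.168.50.1/24 dev eth0
sudo dnsmasq --interface=eth0 --bind-interfaces --dhcp-range=192.168.50.10,192.168.50.100,12h --no-daemon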
What does oVirt offer that proxmox doesn’t? I’m asking because I want to move an ESXi server to another hypervisor, I’m 90% sure it’ll be Proxmox, but I’d like to know my options.
The minimum spec is whatever e-waste you can find that still powers on.
My home server has an i3-4160, 10 gigabytes of mismatched RAM, a ten-year-old 240 GB SSD with 36,000 hours on it, and three 1 TB hard drives in a RAID5 array, each with ~25,000 power-on hours. It runs Proxmox on bare metal with virtualized OPNsense, Nextcloud, and Jellyfin servers (plus smaller services). Jank levels are high, but not fatal, and it was mostly free.