

I'm struggling to find it, but there's like an “AI tarpit” that causes scrapers to get stuck. Something like that? I'm sure I saw it posted on Lemmy recently. Hopefully someone can link it.
Log files 101: configure it yourself.
the crease will appear within days
price isn't an issue and looking for the most premium
?? So you don't want the most premium?
steamdeck
I've not shared mine to the point anyone can arbitrarily get media. They have to ask. And I can always say “Oh, it's not available on my sources” or whatever.
Unfortunately not in this example. It has the compose to spin up Masto, and the variables to set to tell it where Redis etc. is.
You will need to review all the required variables and configure them as you require. But basically, yeah.
EDIT - NO
it's not just grab and run. From the docs:
This container requires separate postgres and redis instances to run.
If you're looking for a sample docker-compose,
Some things, or points, to consider.
good luck have fun!
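If it helps, purely as an illustration of the “separate postgres and redis” point, a bare-bones compose sketch could look something like the below. The image tags, variable names, and password are placeholders/assumptions on my part rather than taken from that example, so check the docs for the actual required variables.

services:
  postgres:
    image: postgres:15-alpine            # placeholder tag, use whatever the docs recommend
    restart: unless-stopped
    environment:
      POSTGRES_USER: mastodon            # placeholder credentials, change these
      POSTGRES_PASSWORD: changeme
      POSTGRES_DB: mastodon
    volumes:
      - pg_data:/var/lib/postgresql/data

  redis:
    image: redis:7-alpine
    restart: unless-stopped
    volumes:
      - redis_data:/data

  mastodon:
    image: lscr.io/linuxserver/mastodon:latest   # assumed image, use whichever one your docs specify
    restart: unless-stopped
    depends_on:
      - postgres
      - redis
    environment:
      - DB_HOST=postgres                 # point the app at the two services above
      - REDIS_HOST=redis
      # ...plus all the other required variables from the docs (domain, secrets, SMTP, etc.)
    ports:
      - "443:443"
    volumes:
      - mastodon_config:/config

volumes:
  pg_data:
  redis_data:
  mastodon_config:

The point being: the app, postgres, and redis are three separate services, and the app is told where the other two live via its environment variables.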
NPM as in nginx and not Node Package Manager?
When you said Jellyfin streaming isn't working: are you able to actually get to the Jellyfin UI and it's the stream that fails, or can you not access Jellyfin at all via nginx?
It depends on whether a whole-season torrent exists or not. If Sonarr can identify one that's a whole season, it should download that when you search at season level. If you've searched one episode at a time, you'll get single episodes.
You can do an interactive search and, IIRC, specify full season during that search.
Check the Activity page to see if it's stuck on found/downloading/extracting/importing
Check trackers/sources aren’t down
Check in log.txt for exceptions
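For the last one, something like this is usually enough to spot them quickly (adjust the path to wherever your logs actually live):

$ grep -in "exception" log.txt    # case-insensitive, with line numbers
$ tail -n 100 log.txt             # or just eyeball the most recent entries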
When you tried Caddy and received an error, it looks like you were getting the wrong image name.
Then you mentioned deleting the Caddyfile because the configuration didn't work. But, if I'm following correctly, the Caddyfile wouldn't yet be relevant if the Caddy container hadn't actually run.
Pulling from Caddy's docs, you should just need to run:
$ docker run -d -p 80:80 \
-v $PWD/Caddyfile:/etc/caddy/Caddyfile \
-v caddy_data:/data \
caddy
Where $PWD is the directory the terminal is currently in.
Further docs for then configuring HTTPS can be found here, under
Automatic TLS with the Caddy image
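Roughly, that boils down to giving Caddy a real domain in the Caddyfile and publishing 443 as well. The domain and upstream below are placeholders:

example.com {
    # Caddy fetches and renews the certificate for this domain automatically.
    # Inside Docker, "localhost" is the Caddy container itself, so point this
    # at your service's container name or the host's IP instead.
    reverse_proxy your-app:8080
}

and then run with 443 (plus 443/udp for HTTP/3) published too:

$ docker run -d \
    -p 80:80 -p 443:443 -p 443:443/udp \
    -v $PWD/Caddyfile:/etc/caddy/Caddyfile \
    -v caddy_data:/data \
    caddy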
made things slow
That's probably referring to how file systems are handled. Going from WSL to the Windows file system is slower than using the “proper” mount point.
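For example (paths are illustrative), working on a project stored under /mnt/c goes through the Windows drive mount and is noticeably slower than keeping the same project on the Linux side:

$ time du -sh /mnt/c/Users/you/project   # Windows filesystem accessed through the mount - slow path
$ time du -sh ~/project                  # native Linux filesystem inside WSL - fast path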
Unrestricted
yes
According to this, you can clone an app to have multiple instances of it:
https://www.makeuseof.com/tag/run-multiple-app-copies-android/
Something like this?
https://docs.linuxserver.io/general/swag
and add authentication to private services
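SWAG is nginx underneath, so “add authentication” generally means dropping basic auth (or an SSO layer) into the relevant proxy conf. A generic nginx sketch, with example paths and upstream (check SWAG's own sample confs for the exact layout):

# inside the server/location block that proxies your private service
location / {
    auth_basic "Restricted";
    auth_basic_user_file /config/nginx/.htpasswd;   # example path; create it with: htpasswd -c .htpasswd youruser
    proxy_pass http://192.168.1.10:8096;            # example upstream
}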
Install an OS on the card to boot from? It's the same process as making a bootable live USB stick.
The performance will be poor in comparison to an SSD, and the many r/w operations will reduce the longevity of the card.
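For example, on Linux it's the usual dd (or Etcher/Rufus) routine. The image name and /dev/sdX below are placeholders, and writing to the wrong device will wipe it, so check with lsblk first:

$ lsblk                                   # confirm which device node is the SD card
$ sudo dd if=installer.iso of=/dev/sdX bs=4M status=progress conv=fsync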
Yes, I did read the OP.
Edit: I see this was downvoted without a response, but I'll put this out there anyway.
If you host a public site, which you expect anyone can access, there is very little you can do to exclude an AI scraper specifically.
Hosting your own site for personal use? IP blocks etc. will prevent scraping.
But how do you tell legitimate users from scrapers? It's very difficult.
They will use your traffic up either way. Don't want that? You could waste their time (tarpit), or take your hosting away from public access.
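About the most you can do for the “polite” crawlers is robots.txt plus a user-agent block, and both only catch bots that identify themselves honestly; the agents below are just examples of published crawler names:

# robots.txt
User-agent: GPTBot
Disallow: /

User-agent: CCBot
Disallow: /

# nginx, inside the server block: refuse self-identified crawlers (trivially spoofable)
if ($http_user_agent ~* "(GPTBot|CCBot)") {
    return 403;
}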
Downvoter: what's your alternative?