HomeLab

My adventures

Overview

The Setup

T710

  • PVE
    • ProxMox
    • Dual L5640 (hexacore)
    • 48GB RAM
    • Dual embedded Broadcom® NetXtreme® II 5709c Gigabit Ethernet NIC with failover and load balancing (4 total ports)
    • Single Gigabit Adapter U3867
    • PERC H700
      • (4) 250GB Raid10
        • VM Storage
      • (4) 2TB Raid10
        • Bulk data and backups

SSDNodes X-Large+

  • Gummi0
    • Centos7
    • 4 Xeon cores
    • 24GB RAM
    • 120GB SSD
Overview

Servers

PVE

  • ProxMox
    • Port 8006
Sora
  • Server 2019
    • AD
    • DNS
      • This uses a Split-Zone
      • https://social.technet.microsoft.com/Forums/ie/en-US/4d97325b-ff3a-4f46-ba6e-dc3f4ff978e1/dns-internal-domain-has-same-name-as-external-website?forum=winserverNIS
      • http://www.itgeared.com/articles/1005-active-directory-domain-name/
Kairi
  • Server 2019
    • AD
    • DNS
    • DHCP
Riku
  • Centos7
    • FOG
      • :443/fog
Ansem
  • Centos7
    • Plex
Larxene
  • Freenas
    • NFS and SMB Shares
Demyx
  • Centos7
    • Ansible
Xaldin
  • Centos7
    • Docker
Axel
  • Centos7
    • Graylog
Xion
  • Centos7
    • ProtonMail Bridge
Rex
  • Centos7
    • ARK
Cerberus
  • Centos7
    • NGINX reverse proxy
Overview

Containers

Nextcloud

  • Nginx Proxy
    • jwilder/nginx-proxy
      • This functions as a reverse proxy for all of the other containers. It can front any container, exposing it on 443, and it applies SSL certificates supplied by the companion container.
  • LetsEncrypt Certs
    • jrcs/letsencrypt-nginx-proxy-companion
      • This container will seek out a cert for every hostname declared in the other containers, taking all of the hassle out of managing certs. It also maintains these certs and presents them to the proxy for use.
  • Nextcloud
    • nextcloud
      • Build cron into the image (via a Dockerfile) for cron jobs to work properly
      • This container uses a maria container as a DB.
  • Maria
    • mariadb
      • This container is spun up for every container needing a DB; it functions as a MySQL database. I use one container per database, and no database containers are shared across applications.
  • Apache web server
    • httpd
      • An Apache server. It serves my base page and a few files, and doesn't do much else.
  • Smashing Dashboards
    • visibilityspots/smashing
      • A Smashing dashboard; I use it as an alarm clock because my real one broke.
  • Stikked
    • viranch/stikked
      • A pastebin alternative; this is a novelty to me but I like it. It is ephemeral: I didn't bother to make any volumes for it, and all pastes expire after at most an hour. This container needs a mariadb backend to hold its data.
  • BookStack
    • solidnerd/bookstack
      • What do you think this is for... It needs a mariadb backend and a few hacks to keep it together, like volumes mounted to put .env into place.
  • BitWarden_RS
    • mprasil/bitwarden
      • Great self-hosted password manager. This is the Rust image, which uses far fewer resources than the official solution.
  • Gitea
    • gitea/gitea
      • Self-hosted git solution that can use an SQL backend or the included SQLite. It supports LDAP and 2FA.
  • Organizr
    • lsiocommunity/organizr
      • A central dashboard or hub for accessing all your self-hosted things in one place. It just iframes sites, so it cannot pass internal resources or show pages that forbid framing by default.
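The proxy/companion/app relationship above can be sketched as a compose file. This is a reconstruction, not my actual config: the hostnames, email, and volume names are placeholders.

```yaml
version: "2"
services:
  proxy:
    image: jwilder/nginx-proxy
    ports: ["80:80", "443:443"]
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock:ro   # watches containers come and go
      - certs:/etc/nginx/certs
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html
    labels:
      - com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro
      - certs:/etc/nginx/certs      # writes certs where the proxy reads them
      - vhost:/etc/nginx/vhost.d
      - html:/usr/share/nginx/html  # ACME http-01 challenge files
  nextcloud:
    image: nextcloud
    environment:
      - VIRTUAL_HOST=cloud.example.com      # proxy routes this hostname here
      - LETSENCRYPT_HOST=cloud.example.com  # companion requests a cert for it
      - LETSENCRYPT_EMAIL=admin@example.com
volumes:
  certs:
  vhost:
  html:
```

Any container started with a VIRTUAL_HOST variable gets picked up automatically; the companion only requests certs for containers that also declare LETSENCRYPT_HOST.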
Overview

Maps

HomeLab.png

Configuration

Cent7 Base

Packages

realmd
sssd
oddjob
oddjob-mkhomedir
sssd
adcli
samba-common-tools

vim
iptables-services
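With those packages in place, the domain join itself is a couple of realmd commands. A hedged sketch; the domain and admin account below are placeholders, not my real values:

```shell
# Hedged sketch of the AD join these packages enable. DOMAIN/ADMIN are placeholders.
DOMAIN="domain.local"
ADMIN="Administrator"
join_cmd="realm join --user=$ADMIN $DOMAIN"
# realm discover shows what realmd can see before you commit to a join.
if command -v realm >/dev/null 2>&1; then
    realm discover "$DOMAIN"
    $join_cmd          # prompts for the AD admin password
else
    echo "would run: $join_cmd"
fi
```

After a successful join, oddjob-mkhomedir handles home directory creation for domain users on first login.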

IPTables

/etc/sysconfig/iptables

*filter
:INPUT DROP [19:2321]
:FORWARD ACCEPT [0:0]
:OUTPUT DROP [2:116]
-N SSHATTACK
-A SSHATTACK -j LOG --log-prefix "Possible SSH attack! " --log-level 7
-A SSHATTACK -j DROP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Rate-limit new SSH connections before the final accept, otherwise SSHATTACK never fires
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 120 --hitcount 4 -j SSHATTACK
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 636 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 389 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 123 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 22 -j ACCEPT
COMMIT
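To make this ruleset take effect on CentOS 7 you have to swap firewalld for the iptables-services unit. A sketch, assuming root:

```shell
# Hedged sketch: load /etc/sysconfig/iptables at boot instead of firewalld.
RULES=/etc/sysconfig/iptables
apply_rules() {
    systemctl stop firewalld
    systemctl disable firewalld    # firewalld and iptables-services conflict
    systemctl enable iptables      # from the iptables-services package
    systemctl start iptables
    iptables-restore < "$RULES"    # load the rules right now
    service iptables save          # persist runtime changes back to the file
}
# Only run for real when we are root and the file exists.
if [ "$(id -u)" = 0 ] && [ -f "$RULES" ]; then
    apply_rules
else
    echo "would load $RULES with iptables-restore"
fi
```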
Configuration

Server 2019

https://www.microsoft.com/en-us/software-download/windowsinsiderpreviewserver

Datacenter Edition 6XBNX-4JQGW-QX6QG-74P76-72V67

Standard Edition MFY9F-XBN2F-TYFMP-CCV49-RMYVH

https://blogs.windows.com/windowsexperience/2018/08/28/announcing-windows-server-2019-insider-preview-build-17744/

Configuration

UTM

Setup

MTU

MTU is an issue for Metrocast: their DHCP assigns a value of 576, while 1500 is actually needed. To fix that you need to do the following:

In /var/chroot-dhcpc/etc there is a file named: default.conf

default.conf

interface "[<INTERFACE>]" {
timeout 20;
retry 60;
script "/usr/sbin/dhcp_updown.plx";
request subnet-mask, broadcast-address, time-offset,
routers, domain-name, domain-name-servers, host-name,
domain-search, nis-domain, nis-servers,
ntp-servers, interface-mtu;
[<HOSTNAME>]
}

"interface-mtu": if you remove that from the request list (but not the ; that follows it!) and take your interface down/up, the MTU can be edited by hand again in the GUI.

AND... the interface will use the number you give it, not the dumb MTU value your ISP left in their equipment because they did not bother to change it.
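With interface-mtu gone from the request line, the value can also be forced from a shell after bouncing the interface. The interface name here is an assumption:

```shell
# Hedged sketch: set the MTU by hand once dhclient stops requesting it.
IFACE="eth0"   # assumption -- substitute your WAN interface
MTU=1500
if [ "$(id -u)" = 0 ] && ip link show "$IFACE" >/dev/null 2>&1; then
    ip link set dev "$IFACE" mtu "$MTU"
    ip link show "$IFACE" | grep -o 'mtu [0-9]*'   # confirm the new value
else
    echo "would run: ip link set dev $IFACE mtu $MTU"
fi
```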

Licensing

If you do not provide a home license at install, you will need to modify the system to add it later. Prior to 30 days you can do this via SSH, after 30 days you will need to boot an alternate OS so you can interact with the file system. Just remove /etc/asg

Auth

I was stupid; the install was easy and the auth server was straightforward. There is no need for SSO configuration.

Add the login group under WebAdmin Settings. If you need help: https://community.sophos.com/kb/en-us/120348

html5 remote access

Allow the user portal

  • Configure the User Portal.

From Management > User Portal > Global, click on the folder beside ‘Allowed networks’ then drag ‘Any’ into the box. You may want to restrict this more, but it’s likely you will have people both inside and outside your firewall who will want to access the User Portal.

  • https://community.sophos.com/kb/en-us/115305

The portal was easy to set up, you need to use NLA auth for RDP and set a login. I defined portals by user instead of group because of this.

  • https://community.sophos.com/kb/en-us/117470

  • https://community.sophos.com/kb/en-us/115157

html5 sites

Again, simple. Ports are under the advanced tab. Don't forget to run the cert finder, the last box in the addition menu.

SSL VPN

This is nifty as hell, it is just an implementation of OpenVPN.

  • https://www.sophos.com/en-us/medialibrary/PDFs/documentation/utm90_Remote_Access_Via_SSL_geng.pdf

The configs are generated and served on the remote access site

Passthrough

It is a bitch.

https://pve.proxmox.com/wiki/Pci_passthrough

https://forum.proxmox.com/threads/dell-poweredge-r710-ethernet-passthrough-issues.44097/

Bridge

I ended up bridging across a new NIC using the default Intel card. I cloned the MAC of the TP-Link and rebooted the modem to get an IP; I am not sure if those steps are necessary.

I reinstalled the OS to properly go through the internet setup dialogue. I set the admin network on .254 of the LAN NIC.

During setup I changed the IP to 10.1 and everything worked fine on the LAN. Running through the TP-Link broke everything.

DNS

To allow UTM to resolve host names I needed to add Sora as a forwarder, and set internal network to use it. Those are the first and second tabs of DNS under Network Services.

Use "DNS Request Route" to forward "domain.local" to the AD DNS server.

https://community.sophos.com/products/xg-firewall/f/network-and-routing/74728/internal-dns-not-resolving-local-dns-names

DynDNS

https://community.sophos.com/kb/en-us/127039

Configuration

FOG

Setup

Basic guide (the official docs are the same, this is just condensed): http://khmerdigital.net/2017/12/11/how-to-install-fog-in-centos-7/

  1. Prepare The System

Configure firewalld

yum install firewalld -y
systemctl start firewalld
systemctl enable firewalld 
for service in http https tftp ftp mysql nfs mountd rpc-bind proxy-dhcp samba; do
    firewall-cmd --permanent --zone=public --add-service=$service
done

echo "Open UDP port 49152 through 65532, the possible used ports for fog multicast" 
firewall-cmd --permanent --add-port=49152-65532/udp
echo "Allow IGMP traffic for multicast"
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p igmp -j ACCEPT
systemctl restart firewalld.service
echo "Done."   

Add firewalld exceptions for DHCP and DNS (In case, you are going to run DHCP in FOG server)

for service in dhcp dns; do firewall-cmd --permanent --zone=public --add-service=$service; done
firewall-cmd --reload
echo Additional firewalld config done.
Set SELinux to permissive on boot

sed -i.bak 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  2. Setup FOG

To install the latest version of FOG using git:

yum install git -y
cd ~
mkdir git
cd git
git clone https://github.com/FOGProject/fogproject.git
cd fogproject/bin
./installfog.sh
echo "Congratulations! FOG is installed on your server"

Settings

Here are the settings FOG will use:

  • Base Linux: Redhat

  • Detected Linux Distribution: CentOS Linux

  • Server IP Address:######

  • Server Subnet Mask:######

  • Interface: eth0

  • Installation Type: Normal Server

  • Internationalization: 0

  • Image Storage Location: /images

  • Using FOG DHCP: No

  • DHCP will NOT be set up, but you must configure your current DHCP server to use FOG for PXE services.

  • On a Linux DHCP server you must set: next-server and filename

  • On a Windows DHCP server you must set options 066 and 067

  • Option 066/next-server is the IP of the FOG Server:

  • Option 067/filename is the bootfile: (e.g. undionly.kpxe)
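On a Linux (ISC dhcpd) DHCP server, those two settings look like the fragment below; the FOG server IP is a placeholder:

```
# dhcpd.conf fragment -- 192.168.1.50 is a placeholder for the FOG server IP
next-server 192.168.1.50;     # option 066 on Windows DHCP
filename "undionly.kpxe";     # option 067 on Windows DHCP
```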

If you would like to back up your FOG database, you can do so using MySQL Administrator or by running the following command in a terminal window (Applications > System Tools > Terminal); this will save the backup in your home directory.

mysqldump --allow-keywords -x -v fog > fogbackup.sql

Imaging

Linux

Boot Linux, register it with FOG, then click capture twice under hosts. The first click will make an image association and the second will schedule the capture.

https://wiki.fogproject.org/wiki/index.php?title=Booting_into_FOG_and_Capturing_your_first_Image

Windows

Changes

I added a separate disk for images. You just mount it to /images.

https://wiki.fogproject.org/wiki/index.php?title=Adding_Storage_to_a_FOG_Server

Issues

FOG isn't connecting when iptables is in place, regardless of rules

  • I ran the firewalld commands instead, and it works. I will live with that for this host.

FOG isn't responding to a hostname, only an IP, in the browser

  • I think this is a browser issue, IE loaded it fine.

FOG was failing during capture; this was because I made the image a resizable disk instead of a non-resizable disk

Maintenance

Database Maintenance Commands

Sometimes a host will be created with an ID of 0 (zero). Sometimes MAC addresses lose their association with a host and, in a sense, become orphaned. Sometimes there are tasks that just need to be cleared out. Sometimes there are hosts without MACs, groups of ID 0 get made, snapins of ID 0 get made, or snapins are associated with hosts that don't exist anymore. Other things go wrong sometimes. These things cause problems with FOG's operation and need to be cleared out in order to have a clean and healthy database. The commands below are intended for the FOG 1.3, 1.4, and 1.5 series; they will clear these problems for you. This also occasionally fixes problems with multicast, where the partclone screen just sits there doing nothing.

https://wiki.fogproject.org/wiki/index.php/Troubleshoot_MySQL

# No password:
mysql -D fog

# The following chunk of commands will clean out most problems and are safe:
DELETE FROM `hosts` WHERE `hostID` = '0';
DELETE FROM `hostMAC` WHERE hmID = '0' OR `hmHostID` = '0';
DELETE FROM `groupMembers` WHERE `gmID` = '0' OR `gmHostID` = '0' OR `gmGroupID` = '0';
DELETE FROM `snapinGroupAssoc` WHERE `sgaID` = '0' OR `sgaSnapinID` = '0' OR `sgaStorageGroupID` = '0';
DELETE FROM `snapinAssoc` WHERE `saID` = '0' OR `saHostID` = '0' OR `saSnapinID` = '0';
DELETE FROM `hosts` WHERE `hostID` NOT IN (SELECT `hmHostID` FROM `hostMAC` WHERE `hmPrimary` = '1');
DELETE FROM `hosts` WHERE `hostID` NOT IN (SELECT `hmHostID` FROM `hostMAC`);
DELETE FROM `hostMAC` WHERE `hmhostID` NOT IN (SELECT `hostID` FROM `hosts`);
DELETE FROM `snapinAssoc` WHERE `saHostID` NOT IN (SELECT `hostID` FROM `hosts`);
DELETE FROM `groupMembers` WHERE `gmHostID` NOT IN (SELECT `hostID` FROM `hosts`);
DELETE FROM `tasks` WHERE `taskStateID` IN ("1","2","3");
DELETE FROM `snapinTasks` WHERE `stState` in ("1","2","3");
TRUNCATE TABLE multicastSessions; 
TRUNCATE TABLE multicastSessionsAssoc; 
DELETE FROM tasks WHERE taskTypeId=8;

# This one clears the history table which can get pretty large:
TRUNCATE TABLE history;
   
# This one will clear the userTracking table, This table is where user login/logout (for host computers, not the fog server) is stored. This table can also get pretty large.
TRUNCATE TABLE userTracking;
    
quit
Configuration

Gummi0

Gummi0 has realmd installed, along with SSSD and samba-common-tools, and was then rebooted to allow realmd to work. I also dropped the base VPN conf in place and modified resolv.conf to use Sora first and Cloudflare second.

Backup script

ssh user@remote tar cpzf - /opt/Docker/ > /local/foo/docker_backup_$(date +%F).tar.gz
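A slightly fleshed-out version of that one-liner, with the same placeholder host and paths plus a simple retention pass (the 14-archive limit is my assumption, not part of the original):

```shell
#!/bin/sh
# Pull-style backup: remote tar streamed over SSH into a date-stamped archive.
HOST="user@remote"    # placeholder from the original one-liner
SRC="/opt/Docker/"
DEST="/local/foo"
STAMP="$(date +%F)"
OUT="$DEST/docker_backup_$STAMP.tar.gz"
if [ -n "$RUN_BACKUP" ]; then     # guard so the sketch is safe to source
    ssh "$HOST" tar cpzf - "$SRC" > "$OUT"
    # retention: keep only the 14 newest archives (assumption: daily cron)
    ls -1t "$DEST"/docker_backup_*.tar.gz | tail -n +15 | xargs -r rm -f
else
    echo "would write $OUT"
fi
```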
Configuration

TheLounge Simplification

Edit the file below and remove the sections below it to strip the password and real-name fields from TheLounge when it is used in a public setting.

sudo -E vim public/js/bundle.js

<div class="col-sm-3"><label for="connect:password">Password</label></div><div class="col-sm-9 password-container"><input class="input" id="connect:password" type="password" name="password" value="'+c(r(null!=(s=null!=n?n.defaults:n)?s.password:s,n))+'" maxlength="512"> '+(null!=(s=e.invokePartial(a(19),n,{name:"../reveal-password",data:o,helpers:t,partials:i,decorators:e.decorators}))?s:"")+' </div>
<div class="col-sm-3"><label for="connect:realname">Real name</label></div><div class="col-sm-9"><input class="input" id="connect:realname" name="realname" value="'+c(r(null!=(s=null!=n?n.defaults:n)?s.realname:s,n))+'" maxlength="512"></div>

In the same file, find "The Lounge - " and replace it with "TechSupport", then edit your config and make the NetworkName "Livechat".

Find <h2>User preferences</h2> and replace it with

<h2>How to use the chat:</h2><h3>1. <a href="https://www.dontasktoask.com/" target="_blank">Dont ask to ask.</a> Just state your question to the channel<br>2. For verbose descriptions use a<a href="https://pastebin.com/" target="_blank"> pastebin site</a> or a link to your reddit post<br>3. Be patient, helpers have lives and are not paid to watch chat, if someone knows they will answer your question in due time</h3>
Configuration

PHPServerMonitor Public Status Page

In the file phpservermon/src/psm/Service/User.php, below line 97 ($this->session = $session;), add:

$kl = isset($_GET["kl"]) ? $_GET["kl"] : '';
if ($kl == "public") {
    $user_id = 3;
    $this->setUserLoggedIn($user_id, true);
    $user = $this->getUser($user_id);
    $this->newRememberMeCookie();
}

In the file phpservermon/src/psm/Module/AbstractController.php, edit line 274 to remove functions you do not want the public to have.

  • To use auth on a web page you need the php-mcrypt package
Configuration

NetData

netdata is a fun and simple performance-graphing tool for servers. I have it installed on all of my servers.

I centralize the viewing of these metrics by creating several large dashboard pages and hosting them on cerberus http://cerberus.dev0.sh/master.html and http://cerberus.dev0.sh/docker.html

I create these via scripts located here https://git.dev0.sh/piper/netdata_dashboards, the repo has information on the scripts.

To host the graphs I replaced the html directory served by nginx with the /usr/share/netdata/web directory, then dropped my html files inside. You can also just make a site for it or use a symlink; it doesn't really matter.
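The symlink variant is the least invasive. The netdata path matches the one above; the nginx docroot is an assumption:

```shell
# Hedged sketch: serve netdata's web root (and my dashboard pages) via nginx.
SRC=/usr/share/netdata/web
DOCROOT=/usr/share/nginx/html    # assumption: default nginx docroot
if [ "$(id -u)" = 0 ] && [ -d "$SRC" ]; then
    cp master.html docker.html "$SRC"/   # generated dashboard pages
    ln -sfn "$SRC" "$DOCROOT/netdata"
else
    echo "would symlink $SRC to $DOCROOT/netdata"
fi
```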

Lessons

The Plex Saga

I finally got Plex working again; it was an ordeal. I had to stop running it in Docker because of how many files it opened, and instead installed it straight onto my Dallas node. That wasn't really an issue: just download the rpm from "Plex Download" then install it.

The real issue was activating it. I needed to turn off cert auth to my server and initiate a PuTTY connection with an SSH tunnel from 8888 to 32400; the connection has to appear local to the server. After that, I just added libraries and all was golden.

Port 32400 needs to be open through the firewall regardless.
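The tunnel itself is one SSH invocation; the login below is a placeholder:

```shell
# Hedged sketch: make the browser session look local to the Plex server.
PLEX_HOST="user@plexserver"   # placeholder login, substitute your own
TUNNEL="ssh -L 8888:localhost:32400 $PLEX_HOST"
if [ -n "$RUN_TUNNEL" ]; then  # guard so the sketch is safe to source
    $TUNNEL
else
    echo "run: $TUNNEL  then browse http://localhost:8888/web to claim the server"
fi
```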

Plex file limit:

/etc/security/limits.d/plex.conf
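The file's contents aren't recorded here; a typical limits.d entry raising the open-file cap for the plex user might look like this (the values are my assumption, not the author's):

```
# /etc/security/limits.d/plex.conf -- assumed values, tune to taste
plex soft nofile 100000
plex hard nofile 100000
```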


Finding Movies

~Plex is having issues finding all of my series, I am having issues tracking down why. I am not looking at logs for the scanner and its analysis though. They are in /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs~

To fix this I made many more libraries, one per show genre

Lessons

BookStack

da gist

This took a while to get going, mostly because of LDAP. I used the first launch to write my env file, then mounted that file into the container. I had to rewrite the LDAP user field because the $ was being left out. The next launch would likely be simpler: just write the env file by hand.

https://blog.rylander.io/2017/06/09/install-and-configure-bookstack-using-docker-and-ldap/

https://gist.github.com/mry/89a93f4f777a277e3b6039972823ca9a

These include the necessary environment variables I didn't find elsewhere.

You may find that you cannot log in with your initial Admin account after changing the AUTH_METHOD to ldap. To get around this, set the AUTH_METHOD to standard, log in with your admin account, then change it back to ldap. You can then edit your profile and add your LDAP uid under the 'External Authentication ID' field. You will then be able to log in with that ID.

$$

https://github.com/BookStackApp/BookStack/issues/414

Specific to running BookStack inside a Docker container and populating the environment variables.

For Feature Requests Update documentation to either include a clarifying note or include setting in example.

use double $$ since we are using docker

  • LDAP_USER_FILTER=(&(sAMAccountName=$${user}))

Expected Behavior: The PHP variable is expanded within BookStack and the LDAP query matches as expected.

Actual Behavior: With the single $ sign, the expression becomes a static string; the query will never return a result and no error message is thrown. This can be verified by replacing it with a proper username.

Just a note: I had to map my .env file as a volume and then manually edit this field to "LDAP_USER_FILTER=(&(sAMAccountName=${user}))". Even with the $$ trick that I found elsewhere, it seems the file was written with strange characters, or no $'s at all.
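For reference, the two working shapes of that variable; the compose layout is an assumption:

```yaml
# docker-compose environment block -- $$ keeps compose from interpolating ${user}
services:
  bookstack:
    image: solidnerd/bookstack
    environment:
      - LDAP_USER_FILTER=(&(sAMAccountName=$${user}))
# In a plain .env file mounted into the container, use a single $ instead:
#   LDAP_USER_FILTER=(&(sAMAccountName=${user}))
```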

Lessons

Roundcube plugins

These were fun to learn. I am running Roundcube in Docker, mostly for the experience and the WebDAV abilities. The dav plugins are installed manually: I first mapped a volume to /var/www/html/plugins and ran a git clone for both utilities into that dir.

The plugins in question are carddav and caldav

I cloned into the volumes from the docker host, then chowned them to match the ownership of the rest of the mounted container.

git clone https://github.com/blind-coder/rcmcarddav.git carddav

Once cloned, you need to get the plugins installed. They use PHP Composer, which I think needs to be built in each plugin. I docker exec'd into the container and ran the rest from inside it.

docker exec -it cube /bin/bash
cd /var/www/html/plugins/carddav
php -r "copy('https://getcomposer.org/installer', 'composer-setup.php');"
php -r "if (hash_file('SHA384', 'composer-setup.php') === '93b54496392c062774670ac18b134c3b3a95e5a5e5c8f1a9f115f203b75bf9a129d5daa8ba6a13e2cc8a1da0806388a8') { echo 'Installer verified'; } else { echo 'Installer corrupt'; unlink('composer-setup.php'); } echo PHP_EOL;"
php composer-setup.php
php -r "unlink('composer-setup.php');"

Once installed, execute it.

php composer.phar install

Do this for each plugin, and restart the container afterwards to load them properly.

Lessons

Koel

Getting Koel up and running gave me many issues. I originally used the image by Binhex (I use his deluge image); it is a self-contained nginx, koel, and mysql image. It doesn't log in though: it flashes and then just stops working. I decided to stand up my own image. I worked through a couple of images but settled on the Alpine-based official "composer" image.

Alpine was fun to learn; it uses the apk package manager. Packages are added with apk add foo

I had many issues, the first being "ext-exif" not being found. Apparently Docker images have their own way to handle PHP extensions: docker-php-ext-install exif. I had to add several more extensions once I found that the database could not be contacted: docker-php-ext-install exif mysqli pdo pdo_mysql.

More things were needed once I started running koel, notably ffmpeg and yarn, which can be installed by name.
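Collecting those steps, the image build sketches out to a short Dockerfile. This is my reconstruction of the steps described above, not the exact file:

```dockerfile
# Sketch: Alpine-based official composer image plus the deps koel needed.
FROM composer
# runtime tools koel calls out to; installable by name on Alpine
RUN apk add --no-cache ffmpeg yarn
# PHP extensions via docker's helper (not pecl or the package manager)
RUN docker-php-ext-install exif mysqli pdo pdo_mysql
```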

After working through the yarn dependency, I am now stuck at:

ErrorException thrown with message "File /css/app.css not defined in asset manifest. (View: /app/resources/views/index.blade.php)"

Stacktrace:
#41 ErrorException in /app/app/Application.php:52
#40 InvalidArgumentException in /app/app/Application.php:52
#39 App\Application:rev in /app/vendor/laravel/framework/src/Illuminate/Support/Facades/Facade.php:223
#38 Illuminate\Support\Facades\Facade:__callStatic in /app/storage/framework/views/f7124c264b76245addca5c377e9c70547b2c83bf.php:21
#37 include in /app/vendor/laravel/framework/src/Illuminate/View/Engines/PhpEngine.php:43
#36 Illuminate\View\Engines\PhpEngine:evaluatePath in /app/vendor/laravel/framework/src/Illuminate/View/Engines/CompilerEngine.php:59
#35 Illuminate\View\Engines\CompilerEngine:get in /app/vendor/laravel/framework/src/Illuminate/View/View.php:142
#34 Illuminate\View\View:getContents in /app/vendor/laravel/framework/src/Illuminate/View/View.php:125
#33 Illuminate\View\View:renderContents in /app/vendor/laravel/framework/src/Illuminate/View/View.php:90
#32 Illuminate\View\View:render in /app/vendor/laravel/framework/src/Illuminate/Http/Response.php:42
#31 Illuminate\Http\Response:setContent in /app/vendor/symfony/http-foundation/Response.php:202
#30 Symfony\Component\HttpFoundation\Response:__construct in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:747
#29 Illuminate\Routing\Router:toResponse in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:719
#28 Illuminate\Routing\Router:prepareResponse in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:679
#27 Illuminate\Routing\Router:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:30
#26 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:104
#25 Illuminate\Pipeline\Pipeline:then in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:681
#24 Illuminate\Routing\Router:runRouteWithinStack in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:656
#23 Illuminate\Routing\Router:runRoute in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:622
#22 Illuminate\Routing\Router:dispatchToRoute in /app/vendor/laravel/framework/src/Illuminate/Routing/Router.php:611
#21 Illuminate\Routing\Router:dispatch in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php:176
#20 Illuminate\Foundation\Http\Kernel:Illuminate\Foundation\Http\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:30
#19 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php:31
#18 Illuminate\Foundation\Http\Middleware\TransformsRequest:handle in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:151
#17 Illuminate\Pipeline\Pipeline:Illuminate\Pipeline\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
#16 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/TransformsRequest.php:31
#15 Illuminate\Foundation\Http\Middleware\TransformsRequest:handle in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:151
#14 Illuminate\Pipeline\Pipeline:Illuminate\Pipeline\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
#13 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/ValidatePostSize.php:27
#12 Illuminate\Foundation\Http\Middleware\ValidatePostSize:handle in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:151
#11 Illuminate\Pipeline\Pipeline:Illuminate\Pipeline\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
#10 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/app/Http/Middleware/UseDifferentConfigIfE2E.php:22
#9 App\Http\Middleware\UseDifferentConfigIfE2E:handle in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:151
#8 Illuminate\Pipeline\Pipeline:Illuminate\Pipeline\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
#7 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Middleware/CheckForMaintenanceMode.php:62
#6 Illuminate\Foundation\Http\Middleware\CheckForMaintenanceMode:handle in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:151
#5 Illuminate\Pipeline\Pipeline:Illuminate\Pipeline\{closure} in /app/vendor/laravel/framework/src/Illuminate/Routing/Pipeline.php:53
#4 Illuminate\Routing\Pipeline:Illuminate\Routing\{closure} in /app/vendor/laravel/framework/src/Illuminate/Pipeline/Pipeline.php:104
#3 Illuminate\Pipeline\Pipeline:then in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php:151
#2 Illuminate\Foundation\Http\Kernel:sendRequestThroughRouter in /app/vendor/laravel/framework/src/Illuminate/Foundation/Http/Kernel.php:116
#1 Illuminate\Foundation\Http\Kernel:handle in /app/index.php:52
#0 require_once in /app/server.php:19

Backups

The Plan

I want my backups to be based around a central server, without clients on each machine. With just Linux I would accomplish this with basic scripts. For my VPS I just want to clone /opt and send it off every backup cycle, and with the magic of Server 2019 I plan to do the same thing there: Server 2019 supports tar natively and runs an SSH server. I want to execute the server-side backup commands on demand and pull the resulting file to the backup server every cycle. I do not care about the OS at all, just the data.

A central server will hold all of the

Backups

The basic script