My adventures





Patch Panel

Port 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
Dest 74656 74656 74656 74656 74656 AP AP AP L14 L13 L17 L18 L15 L16 L43
Port 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48
Dest L19 ControlStation
Src Switch SW
VLAN Legend

VLAN   10      20      30          40     50
Color  Purple  Orange  Blue        Green  Green/Virtual
Use    Infra   DMZ     Pentesting  Guest  IoT


Port 01 03 05 07 09 11 13 15 17 19 21 23 25 27 29 31 33 35 37 39 41 43 45 47
VLAN 10 20 30 40 10 10 30 40 10 20 30 40/50
Patch 1 2 3 4 8 9 10 11 14 15 16 43
Port 02 04 06 08 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48
VLAN 10 10
Patch pfsense 48



Cent7 Base






:INPUT DROP [19:2321]
:OUTPUT DROP [2:116]
:SSHATTACK - [0:0]
# Log, then drop, connections sent to the SSHATTACK chain
-A SSHATTACK -j LOG --log-prefix "Possible SSH attack! " --log-level 7
-A SSHATTACK -j DROP
-A INPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
# Rate-limit SSH: a 4th new connection within 120 seconds goes to SSHATTACK.
# These two rules must come before the port 22 ACCEPT, or they never match.
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --set
-A INPUT -p tcp -m tcp --dport 22 -m state --state NEW -m recent --update --seconds 120 --hitcount 4 -j SSHATTACK
-A INPUT -p tcp -m tcp --dport 22 -j ACCEPT
-A OUTPUT -m conntrack --ctstate RELATED,ESTABLISHED -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 636 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 443 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 389 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 123 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 80 -j ACCEPT
-A OUTPUT -p udp -m udp --dport 53 -j ACCEPT
-A OUTPUT -p tcp -m tcp --dport 22 -j ACCEPT

Server 2019


Datacenter Edition 6XBNX-4JQGW-QX6QG-74P76-72V67

Standard Edition MFY9F-XBN2F-TYFMP-CCV49-RMYVH






MTU is an issue for Metrocast: they assign a DHCP value of 576 while 1500 is actually needed. To fix that, do the following:

In /var/chroot-dhcpc/etc there is a file named: default.conf


interface "[<INTERFACE>]" {
    timeout 20;
    retry 60;
    script "/usr/sbin/dhcp_updown.plx";
    request subnet-mask, broadcast-address, time-offset,
            routers, domain-name, domain-name-servers, host-name,
            domain-search, nis-domain, nis-servers,
            ntp-servers, interface-mtu;
}

"interface-mtu": if you remove that (but not the trailing ;!!!) and take your interface down and back up, the MTU becomes editable by hand again in the GUI.

AND ... it will use the number you give it, not the bogus MTU value one of your ISPs left in their equipment because they never bothered to change it.
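For reference, the edited request statement then looks like this (a sketch; everything else in the interface block stays the same as above):

```
request subnet-mask, broadcast-address, time-offset,
        routers, domain-name, domain-name-servers, host-name,
        domain-search, nis-domain, nis-servers,
        ntp-servers;
```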


Home licenses cannot be added to a hardware install; you need to remove the ASG_ID line from /etc/asg.


I was stupid; the install was easy and the auth server was straightforward. There is no need for SSO configuration.

Add the login group under WebAdmin Settings. If you need help: https://community.sophos.com/kb/en-us/120348

html5 remote access

Allow the user portal

From Management > User Portal > Global, click on the folder beside ‘Allowed networks’ then drag ‘Any’ into the box. You may want to restrict this more, but it’s likely you will have people both inside and outside your firewall who will want to access the User Portal.

The portal was easy to set up; you need to use NLA auth for RDP and set a login. I defined portals by user instead of group because of this.

html5 sites

Again, simple. Ports are under the Advanced tab. Don't forget to run the cert finder, the last box in the addition menu.


This is nifty as hell; it is just an implementation of OpenVPN.

The configs are generated and served on the remote access site


It is a bitch.




To allow UTM to resolve host names I needed to add Sora as a forwarder, and set internal network to use it. Those are the first and second tabs of DNS under Network Services.

use "DNS Request Route" to forward "domain.local" to AD DNS server.









Basic guide (the official docs are the same, this is just condensed): http://khmerdigital.net/2017/12/11/how-to-install-fog-in-centos-7/

  1. Prepare The System

Configure firewalld

yum install firewalld -y
systemctl start firewalld
systemctl enable firewalld
for service in http https tftp ftp mysql nfs mountd rpc-bind proxy-dhcp samba; do firewall-cmd --permanent --zone=public --add-service=$service; done

echo "Open UDP ports 49152 through 65532, the ports FOG may use for multicast"
firewall-cmd --permanent --add-port=49152-65532/udp
echo "Allow IGMP traffic for multicast"
firewall-cmd --permanent --direct --add-rule ipv4 filter INPUT 0 -p igmp -j ACCEPT
systemctl restart firewalld.service
echo "Done."

Add firewalld exceptions for DHCP and DNS (in case you are going to run DHCP on the FOG server):

for service in dhcp dns; do firewall-cmd --permanent --zone=public --add-service=$service; done
firewall-cmd --reload
echo "Additional firewalld config done."

Set SELinux to permissive on boot:

sed -i.bak 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
  2. Setup FOG

To install the latest version of FOG using git:

yum install git -y
cd ~
mkdir git
cd git
git clone https://github.com/FOGProject/fogproject.git
cd fogproject/bin
./installfog.sh
echo "Congratulations! FOG is installed on your server"


Here are the settings FOG will use:

If you would like to back up your FOG database, you can do so using MySQL Administrator or by running the following command in a terminal window (Applications > System Tools > Terminal); this will save the backup in your home directory.

mysqldump --allow-keywords -x -v fog > fogbackup.sql



Boot Linux, register it with FOG, then click Capture twice under Hosts. The first click makes an image association and the second schedules the capture.




I added a separate disk for images. You just mount it at /images.



FOG isn't connecting if iptables rules are in place, regardless of what the rules are.

FOG isn't responding to a hostname in the browser, only an IP.

FOG was failing during capture; this was because I made the image a resizable disk instead of a non-resizable one.


Database Maintenance Commands

Sometimes, a host will be created with an ID of 0 (zero). Sometimes, MAC addresses lose their association with a host and, in a sense, become orphaned. Sometimes there are tasks that just need clearing out. Sometimes there are hosts without MACs. Sometimes groups with ID 0 get made; sometimes snapins with ID 0 get made. Sometimes snapins are associated with hosts that don't exist anymore. Other things go wrong sometimes. These things cause problems with FOG's operation and need to be cleared out in order to have a clean and healthy database. The commands below are intended for the FOG 1.3, 1.4, and 1.5 series; they will clear these problems for you. This also occasionally fixes problems with multicast where the partclone screen just sits there doing nothing.


# No password:
mysql -D fog

# The following chunk of commands will clean out most problems and are safe:
DELETE FROM `hosts` WHERE `hostID` = '0';
DELETE FROM `hostMAC` WHERE hmID = '0' OR `hmHostID` = '0';
DELETE FROM `groupMembers` WHERE `gmID` = '0' OR `gmHostID` = '0' OR `gmGroupID` = '0';
DELETE FROM `snapinGroupAssoc` WHERE `sgaID` = '0' OR `sgaSnapinID` = '0' OR `sgaStorageGroupID` = '0';
DELETE FROM `snapinAssoc` WHERE `saID` = '0' OR `saHostID` = '0' OR `saSnapinID` = '0';
DELETE FROM `hosts` WHERE `hostID` NOT IN (SELECT `hmHostID` FROM `hostMAC` WHERE `hmPrimary` = '1');
DELETE FROM `hosts` WHERE `hostID` NOT IN (SELECT `hmHostID` FROM `hostMAC`);
DELETE FROM `hostMAC` WHERE `hmhostID` NOT IN (SELECT `hostID` FROM `hosts`);
DELETE FROM `snapinAssoc` WHERE `saHostID` NOT IN (SELECT `hostID` FROM `hosts`);
DELETE FROM `groupMembers` WHERE `gmHostID` NOT IN (SELECT `hostID` FROM `hosts`);
DELETE FROM `tasks` WHERE `taskStateID` IN ("1","2","3");
DELETE FROM `snapinTasks` WHERE `stState` in ("1","2","3");
TRUNCATE TABLE multicastSessions; 
TRUNCATE TABLE multicastSessionsAssoc; 
DELETE FROM tasks WHERE taskTypeId=8;

# This one clears the history table, which can get pretty large:
TRUNCATE TABLE history;

# This one clears the userTracking table, where user login/logout events (for host computers, not the FOG server) are stored. This table can also get pretty large.
TRUNCATE TABLE userTracking;


Gummi0 has realmd installed, along with SSSD and samba-common-tools, then was rebooted to allow realmd to work. I also dropped the base VPN conf in place. I modified resolv.conf to use Sora first and Cloudflare second.

Backup script

ssh user@remote tar cpzf - /opt/Docker/ > /local/foo/docker_backup_$(date +%F)
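A minimal local stand-in for the same tar pipeline, useful for checking the flags before pointing it at the remote (the temp paths below are placeholders for user@remote:/opt/Docker and /local/foo):

```shell
# Create a throwaway source dir standing in for /opt/Docker
src=$(mktemp -d)
echo "config" > "$src/settings.json"

# Same tar flags as the backup one-liner; -C keeps the paths relative
backup="$(mktemp -d)/docker_backup_$(date +%F).tar.gz"
tar cpzf "$backup" -C "$src" .

# Verify the archive is readable and contains the file
tar tzf "$backup" | grep -q 'settings.json' && echo "backup OK"
```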

PHPServerMonitor Public Status Page

In file phpservermon/src/psm/Service/User.php below line 97 $this->session = $session; add

$kl = isset($_GET["kl"]) ? $_GET["kl"] : '';
if ($kl == "public") {
    $user_id = 3;
    $this->setUserLoggedIn($user_id, true);
    $user = $this->getUser($user_id);
}

In file phpservermon/src/psm/Module/AbstractController.php, edit line 274 to remove functions you do not want the public to have.



netdata is a fun and simple performance graphing tool for servers. I have it installed on all of my servers.

I centralize the viewing of these metrics by creating several large dashboard pages and hosting them on cerberus http://cerberus.dev0.sh/master.html and http://cerberus.dev0.sh/docker.html

I create these via scripts located here https://git.dev0.sh/piper/netdata_dashboards, the repo has information on the scripts.

To host the graphs I replaced the html directory served by nginx with the /usr/share/netdata/web directory, then dropped my html files inside. You can also just make a site for it or use a symlink; it doesn't really matter.


Implementing Google-Authenticator for SSH


yum install epel-release
yum install google-authenticator

Edit /etc/pam.d/sshd and add "auth required pam_google_authenticator.so" as the second line (or as the first auth method near the top).

Edit /etc/ssh/sshd_config and make sure ChallengeResponseAuthentication yes is set.

Run google-authenticator as your user to set the TOTP and choose various options for it.
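For reference, the two edits side by side (file paths as above; placement follows the note, adjust to your PAM stack):

```
# /etc/pam.d/sshd — add as the second line / first auth method
auth required pam_google_authenticator.so

# /etc/ssh/sshd_config — make sure this is set, then restart sshd
ChallengeResponseAuthentication yes
```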


TheLounge LDAP configuration

ldap: {
    enable: true,
    url: "ldap://ad.contose.com",
    baseDN: "OU=FOO,OU=ROO,DC=dev0,DC=sh",
    primaryKey: "cn"
},


Iframe in 16.0.X

I followed this https://help.nextcloud.com/t/solved-nextcloud-16-how-to-allow-iframe-usage/52278

Modify /var/www/nextcloud/lib/public/AppFramework/Http/ContentSecurityPolicy.php

protected $allowedFrameDomains = [
    'https://core.dev0.sh',
];

/** @var array Domains which can embed this Nextcloud instance */
protected $allowedFrameAncestors = [
    '\'self\'',
    'https://core.dev0.sh',
];

I also kept my previous .htaccess, which sets the basic X-Frame-Options header allowing core.dev0.sh.


Prevent DC from registering record for root of domain

By default a DC registers a "same as parent folder" record in DNS. This makes the root domain resolve to the address of any DC. Since I use https://dev0.sh as a homepage and my domain name is dev0.sh, I cannot load my homepage without this change.

Create a DWORD called "RegisterDnsARecords" at HKLM\SYSTEM\CurrentControlSet\Services\Netlogon\Parameters and set the value to 0.

Note from source:

Important note regarding LdapIpAddress:

If you are considering preventing this record from being registered in DNS, there are some implications that may impact your ability to locate certain services in the domain. You should be fully aware of what these implications are and how to overcome them.

Read more about LdapIpAddress: AD DS: This domain controller must register its DNS host (A/AAAA) resource records for the domain: http://technet.microsoft.com/en-us/library/dd378858(WS.10).aspx


Techsupport.dev0.sh configuration [TheLounge]

I now edit the source code of TheLounge and build it on my node to serve the r/techsupport community. This source has changes made to client/views/windows/connect.tpl and default/config.js. These changes are cosmetic, for the initial login screen.

My repo is here: https://git.dev0.sh/piper/thelounge_ts

Pull the repo locally and run through the readme to get it started. I use a systemd file to run node start index.js.
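A minimal unit of the kind described; the file name, paths, user, and ExecStart here are assumptions, so adjust them to wherever you cloned the repo:

```
# /etc/systemd/system/thelounge-ts.service (hypothetical name and paths)
[Unit]
Description=TheLounge (r/techsupport build)
After=network.target

[Service]
WorkingDirectory=/opt/thelounge_ts
ExecStart=/usr/bin/node index.js
Restart=on-failure
User=thelounge

[Install]
WantedBy=multi-user.target
```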


RabbitMQ on CentOS 7

Install rabbitmq-server from the standard repos.

Enable the rabbitmq-server service

Install and enable nginx; you only need port 80, so no config changes are needed. Run certbot for your domain name and then set a crontab entry so it keeps renewing. This is where we pull our RabbitMQ certs from.

Specify the certs we pulled in /etc/rabbitmq/rabbitmq.conf

{ssl_options, [{cacertfile,           "/etc/letsencrypt/live/rabbit.dev0.sh/cert.pem"},
               {certfile,             "/etc/letsencrypt/live/rabbit.dev0.sh/fullchain.pem"},
               {keyfile,              "/etc/letsencrypt/live/rabbit.dev0.sh/privkey.pem"},
               {verify,               verify_peer},
               {fail_if_no_peer_cert, false}]},



The Plex Saga

I finally got Plex working again; it was an ordeal. I had to stop running it in Docker because of how many files it opened. I instead installed it straight onto my Dallas node. That wasn't really an issue: just download the rpm from "Plex Download" and install it.

The issue is how to activate it. I needed to turn off cert auth to my server and initiate a PuTTY connection with an SSH tunnel from 8888 to 32400. I had to be local to initiate the connection. After that, I just added libraries and all was golden.

Port 32400 is needed to get through the firewall, regardless.

Plex file limit:


Finding Movies

~Plex is having issues finding all of my series, I am having issues tracking down why. I am not looking at logs for the scanner and its analysis though. They are in /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Logs~

To fix this I made many more libraries, for show genres



da gist

This took a while to get going, mostly because of LDAP. I used the first launch to write my env file, then mounted that internally. I had to rewrite the LDAP user field because the $ was being left out. The next launch would likely be simpler: just write the env file by hand.



These include the necessary environment variables I didn't find elsewhere.

You may find that you cannot log in with your initial admin account after changing AUTH_METHOD to ldap. To get around this, set AUTH_METHOD back to standard, log in with your admin account, then change it back to ldap. You can then edit your profile and add your LDAP uid under the 'External Authentication ID' field. You will then be able to log in with that ID.



Specific to running BookStack inside a Docker container and populating the environment variables.

For Feature Requests Update documentation to either include a clarifying note or include setting in example.

use double $$ since we are using docker

Actual behavior: with the single $ sign, the expression becomes a static string, the query will never return a result, and no error message is thrown. Can be verified by replacing it with a proper username.

Just a note. I had to map my .env file as a volume, and then manually edit this field to "LDAP_USER_FILTER=(&(sAMAccountName=${user}))". Even with this $$ trick that I found elsewhere it seems like the file was written with strange characters or just no $'s at all.
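A sketch of what the $$ trick means in compose form (the key is from the note above; the rest of the file is omitted). docker-compose interpolates $, so a literal $ has to be written as $$:

```
# docker-compose.yml / .env — $$ becomes a single literal $ after interpolation
LDAP_USER_FILTER=(&(sAMAccountName=$${user}))
```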


Backup plans and scripts


All the setups I use for xcp-ng


Encrypted SR

Create your LUKS container on an unused RAID.

I set up without a declared VM storage to facilitate this. XCP-NG 8.0 includes cryptsetup; otherwise you would need to uncomment the CentOS repos in the repo files to install it. I also use a USB stick with a keyfile to unlock the storage at boot.

I use /dev/sdb for this setup.

[voyager ~]# parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel
New disk label type? gpt
(parted) mkpart
Partition name?  []? crypt0
File system type?  [ext2]? ext4
Start? 0%
End? 100%
(parted) quit

Create your password next; remember this, but once we are done you will not use it unless it is an emergency.

[voyager ~]# cryptsetup luksFormat /dev/sdb1

This will overwrite data on /dev/sdb1 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:

Take your USB stick and mount it up. I have one already formatted fat32, but anything works. We need to get the UUID:

[voyager ~]# blkid
/dev/sdc1: SEC_TYPE="msdos" LABEL="KEYBLADE" UUID="FC7E-155E" TYPE="vfat"

Edit /etc/fstab and add this as a mount.

[voyager ~]# vim /etc/fstab
LABEL=root-tdgrhw	/         	ext3	defaults	1  1
LABEL=BOOT-TDGRHW	/boot/efi	vfat	defaults	0  2
LABEL=swap-tdgrhw	swap      	swap	defaults	0  0
LABEL=logs-tdgrhw	/var/log	ext3	defaults	0  2
UUID=FC7E-155E 		/mnt/key 	vfat	defaults	0 0

Mount the luks container so we can make it into an SR

[voyager ~]# cryptsetup luksOpen /dev/sdb1 crypt0
Enter passphrase for /dev/sdb1:
[21:38 voyager ~]# lsblk
sdb          8:16   0 930.5G  0 disk
└─sdb1       8:17   0 930.5G  0 part
  └─crypt0 253:0    0 930.5G  0 crypt
sr0         11:0    1  1024M  0 rom
sdc          8:32   1   1.9G  0 disk
└─sdc1       8:33   1   1.9G  0 part
sda          8:0    0  67.8G  0 disk
├─sda2       8:2    0    18G  0 part
├─sda5       8:5    0     4G  0 part  /var/log
├─sda3       8:3    0   512M  0 part  /boot/efi
├─sda1       8:1    0    18G  0 part  /
└─sda6       8:6    0     1G  0 part  [SWAP]

Mount the USB

[voyager ~]# mount /mnt/key/
[voyager ~]# lsblk
sdb          8:16   0 930.5G  0 disk
└─sdb1       8:17   0 930.5G  0 part
  └─crypt0 253:0    0 930.5G  0 crypt
sr0         11:0    1  1024M  0 rom
sdc          8:32   1   1.9G  0 disk
└─sdc1       8:33   1   1.9G  0 part  /mnt/key
sda          8:0    0  67.8G  0 disk
├─sda2       8:2    0    18G  0 part
├─sda5       8:5    0     4G  0 part  /var/log
├─sda3       8:3    0   512M  0 part  /boot/efi
├─sda1       8:1    0    18G  0 part  /
└─sda6       8:6    0     1G  0 part  [SWAP]

Create your keyfile and add it to the LUKS container. Then don't forget to put it on the USB.

dd if=/dev/urandom of=keyfile bs=2048 count=4
cryptsetup luksAddKey /dev/sdb1 keyfile
mv keyfile /mnt/key/keyfile
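The dd above produces an 8192-byte keyfile (bs=2048 × count=4); a quick sanity check before the keyfile leaves for the USB:

```shell
# Write a test keyfile the same way and confirm its size is 2048*4 = 8192 bytes
keyfile=$(mktemp)
dd if=/dev/urandom of="$keyfile" bs=2048 count=4 2>/dev/null
[ "$(stat -c %s "$keyfile")" -eq 8192 ] && echo "keyfile OK"
```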

Now edit /etc/crypttab so that all these parts work together:

[voyager ~]# vim /etc/crypttab
crypt0 /dev/sdb1 /mnt/key/keyfile luks

Create the SR. I want an ext (local filesystem) SR so VMs are thin-provisioned. If you did not already know: when you open a LUKS container, it is mapped to /dev/mapper/<name>.

xe sr-create type=ext name-label=crypt0 device-config:device=/dev/mapper/crypt0

Once done it should be mounted in XCP-ng Center. You can right-click and set it as the default.

Now reboot and hope you don't have to type that password... Best of luck.