June 2023 Archives

Tue Jun 13 13:50:38 +07 2023

Migration of ZeroShell firewall

Running ZeroShell on VMware ESXi

The only problem with running ZeroShell on VMware as a bridged firewall is that the network adapters must support promiscuous mode. You need to create port groups that allow promiscuous mode and forged transmits.

These port groups may be associated with existing virtual switches or with virtual switches created for the purpose of testing the firewall in VMware.
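For reference, on a standard vSwitch this can be done from the ESXi shell; a minimal sketch, assuming a standard (not distributed) switch and an illustrative port group name:

esxcli network vswitch standard portgroup policy security set \
    --portgroup-name="virtual CSIM" \
    --allow-promiscuous=true --allow-forged-transmits=true

The same settings are available in the vSphere client, under the security policy of the port group.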

For the testing, I have created 3 virtual machines: ZeroShell, a router between CSIM and the internet, and a web server that represents the internet and is used to test the captive portal. I have created 2 virtual switches: one to connect the firewall to the router and one to connect the router to the internet server. Finally, I have created 3 port groups: virtual CSIM and virtual router, both in promiscuous mode, and a normal virtual internet.

Installing ZeroShell

After booting the ZeroShell .iso, choose the Installation Manager and proceed to install ZeroShell on the disk. No need to create a profile at this stage.

The following stages of moving, resizing, deleting and creating partitions do not really need a prior backup: we are working on a brand new system; in the worst case, it can be reinstalled from scratch.

The installation will create 4 partitions:

  Device Boot      Start         End      Blocks   Id  Type      Mount
/dev/sda1   *        2048      432127      215040   83  ext3     [boot]
/dev/sda2          432128     1087487      327680   83  iso9660  /cdrom
/dev/sda3         1087488     1742847      327680   83  iso9660  [unused]
/dev/sda4         1742848   146800639    72528896   83  ext4     /DB

The first partition contains the Linux kernel. The second partition contains ZeroShell system that has been configured for the computer. The third partition is just a copy of the second partition before any modification. The last partition will be used to create the profiles.

My understanding is that the 3rd partition is not used. We need to modify the second partition to include our patches, but a partition of type iso9660 (CD-ROM) is not writable. Instead we will create a copy of the second partition in partition 3, but with an ext4 file system.

After the installation, we remove the installation media and reboot to check that the firewall starts normally. After entering the shell, we use fdisk to delete partitions 3 and 4. We need to reboot after exiting fdisk.

Then we recreate partitions 3 and 4 with fdisk. Even though only 1 GB is used on partition 3, I chose to make it 50 GB. Partition 4 uses all the rest of the available disk. Reboot after exiting fdisk.
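A sketch of the fdisk session (the exact sector numbers depend on the disk; only the sizes matter here):

fdisk /dev/sda
  n          new partition
  p          primary
  3          partition number
  <Enter>    default first sector
  +50G       50 GB for the system copy
  n
  p
  4
  <Enter>    default first sector
  <Enter>    default last sector: the rest of the disk
  w          write the table, then reboot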

An ext4 file system is created on partitions 3 and 4:

mkfs -t ext4 /dev/sda3
mkfs -t ext4 /dev/sda4

Next we will copy the second partition onto partition 3:

mount /dev/sda3 /mnt
cd /cdrom
tar cf - . --one-file-system | (cd /mnt; tar xfBp -)
umount /mnt
mount /dev/sda3 /cdrom

The option --one-file-system is very important in the tar copy; it ensures that the copy will not descend into other mounted file systems. If the option is omitted, the copy will result in a non-working system.

Once the copy is complete, mounting the new partition 3 on top of /cdrom makes it active; the system should still be running normally. Then reboot the system.

Configuring ZeroShell

A new profile can be created. Save it on partition 4. Choose the interface ETH01 for the management interface and give it an IP address different from the running firewall: as it is a transparent/bridged firewall, the IP address is not important for the client devices in CSIM. Use the normal CSIM router address for the gateway.

Activate that new profile; it will reboot the firewall. The firewall is now accessible on the web management interface. Enable ssh and try a command line connection:

ssh admin@<IP address>

Get to a shell prompt and try mounting /dev/sda3 on top of /cdrom. The web interface should still be working fine. In the Pre Boot script, add the command:

mount /dev/sda3 /cdrom

Reboot and check that /dev/sda3 is properly mounted.

We can now copy the configuration from the existing firewall to the virtual one. The new firewall must be configured manually, replicating the running configuration of the existing firewall. I could not just copy the profile because it introduced some incompatibilities; it had to be recreated from scratch, using the existing firewall as a model.

Using a virtual machine that interacts with the CSIM network allows me to use our existing DNS and RADIUS servers. RADIUS has to be configured to allow the IP of the new firewall in clients.conf and sites-enabled/default. Note that for sites-enabled/default, the if statement must go on a single line.
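A minimal sketch of the clients.conf entry, with an illustrative client name and IP address:

client zeroshell-new {
        ipaddr = 192.168.1.253
        secret = <shared secret>
}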

Finally, I configured the firewall section. I used tar to copy the active profile from the old firewall /DB/_DB.002 to a /DB/_DB.002 profile on the new firewall. This profile must not be activated! From that profile, I then copied /DB/_DB.002/var/register/system/net/FW/Chains to the active profile /Database/var/register/system/net/FW/Chains.

The firewall rules have to be edited to replace any occurrence of the IP address of the old firewall with the IP address of the new firewall. The command iptables-save | grep <IP address> will show the list.

Similarly, any rule involving the router IP address has to be duplicated to include the IP address of the virtual router.

The virtual web server used to test the captive portal uses a private IP address. These addresses are blocked by the firewall. To allow testing of the captive portal, a rule must be added into the chains private_from_out (incoming packets from the outside) and private_out, opening a /30 prefix so that it only includes the virtual router and the virtual web server.
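In ZeroShell these chains are normally edited through the web interface; expressed directly as iptables rules, the temporary openings would look roughly like this (a sketch, assuming the 192.168.0.0/30 prefix mentioned below):

iptables -I private_from_out -s 192.168.0.0/30 -j ACCEPT
iptables -I private_out -d 192.168.0.0/30 -j ACCEPT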

Once the firewall goes into production, the duplicated rules for the virtual router and the rules temporarily opening 192.168.0.0/30 should be removed.

Customization

All customization will be saved in /Database. There is a /Database/Patch directory that stores the patches that are automatically applied to the system at boot time. This automatic application is managed by the Pre Boot script.

There are two areas that I customized. I want the Captive Portal to allow the free Authorized service for the whole AIT subnet and not for a single IP only; these are the patches cpAddClient, cpAddService and fw_sys_chain. Besides modifying the web interface, this includes some modifications to the firewall rules.

The other patches only change the colour scheme used by the Captive Portal, as seen by the users.

The Pre Boot script to apply the patches:

cd /
if [ -d '/Database/Patch' ] ; then
  for i in /Database/Patch/* ; do
    # Skip the literal pattern when the directory is empty
    if [ "$i" != '/Database/Patch/*' ] ; then
      # Apply from /; send any reject file to /dev/null
      patch -p0 -r /dev/null < "$i"
    fi
  done
fi
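Each patch is an ordinary unified diff applied with -p0 from /, so the paths in the diff headers are relative to the root. A sketch of producing one, with illustrative paths (GNU patch picks the existing file with the shorter name, here the one without the .orig suffix):

cd /
cp cdrom/path/to/file cdrom/path/to/file.orig
# edit cdrom/path/to/file, then:
diff -u cdrom/path/to/file.orig cdrom/path/to/file > /Database/Patch/mypatch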

Installing software

The most demanding part was making the Amanda backup client available, as it needed cross-compiling.

I used the crosstool-NG environment to compile Amanda. I built it with the configuration here. I used option 3 when using the toolchain.

I configured and built Amanda statically. I chose an old version, Amanda 2.4.5p1, because it did not require the use of glib. I had to link each program in client-src manually and then copied them into the folder /Database/work/amanda on the firewall. Note that runtar must be suid root. configure for Amanda:

./configure --prefix=/opt --enable-static --enable-static-binary \
--with-user=amanda --with-group=amanda --without-ipv6 \
--with-index-server=amanda.cs.ait.ac.th --without-server \
--with-gnutar=/opt/bin/tar \
--with-gnutar-listdir=/opt/var/amanda/gnutar-lists \
--build=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
CPPFLAGS="-static -I$HOME/staging4/usr/include" \
CFLAGS="-static -m32 -I$HOME/staging4/usr/include -L$HOME/staging4/lib -L$HOME/staging4/usr/lib" \
LDFLAGS="-static -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"

I also built xinetd to run Amanda. I had to use CFLAGS instead of CPPFLAGS and add the options -m32 -static. I installed xinetd in /Database/work/bin on the firewall. Note that xinetd does not honor the DESTDIR variable; do not try to run make install. configure for xinetd:

./configure --build=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
--prefix=/usr CPPFLAGS="-I$HOME/staging4/usr/include" \
CFLAGS="-static -m32 -I$HOME/staging4/usr/include -L$HOME/staging4/lib -L$HOME/staging4/usr/lib" \
LDFLAGS="-static -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"
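For reference, the xinetd service entry for the Amanda client typically looks like the sketch below, assuming amandad linked under /opt/libexec as described next:

service amanda
{
        socket_type = dgram
        protocol    = udp
        wait        = yes
        user        = amanda
        group       = amanda
        server      = /opt/libexec/amandad
        disable     = no
}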

The following lines in the Pre Boot script on the firewall take care of recreating the Amanda environment at each reboot:

# Recreate the amanda user and group (the root file system is rebuilt at boot)
echo amanda:x:14:14:Amanda:/opt/var/amanda:/bin/false >>/etc/passwd
echo amanda:x:14: >>/etc/group
cd /Database/work
# Install the xinetd configuration and init script
cp /Database/work/xinetd.conf /etc
cp /Database/work/xinetd /etc/rc.d/init.d/
# Link the Amanda binaries and state directories into place
ln -s /Database/work/amanda /opt/libexec
ln -s /Database/work/amanda-var/amandates /etc
mkdir -p /opt/var
ln -s /Database/work/amanda-var /opt/var/amanda
# Static GNU tar and µemacs
cp /Database/work/bin/tar /bin/tar
cp /Database/work/bin/em /bin/emacs

# To make it available for backup
mkdir /boot2
mount /dev/sda1 /boot2

I also installed GNU tar version 1.34 as well as µemacs.

configure for GNU tar:

./configure --build=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
--prefix=/usr CPPFLAGS="-I$HOME/staging4/usr/include" \
CFLAGS="-static -m32 -I$HOME/staging4/usr/include -L$HOME/staging4/lib -L$HOME/staging4/usr/lib" \
LDFLAGS="-static -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"

The Post Boot script starts the xinetd service once the firewall is online.
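A minimal sketch of that Post Boot line, assuming the init script copied into place by the Pre Boot script above:

/etc/rc.d/init.d/xinetd start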

Finally, I created a cron script that runs every night at 01:00 and makes a list of the rules that have expired. That list is uploaded to ufo to be used when expiring users.
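The corresponding crontab entry would look like this (the script path is illustrative):

0 1 * * * /Database/work/bin/expired-rules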

SSL certificate

As this new firewall uses recent cipher versions, it is worth having an official SSL certificate installed.

The ZeroShell captive portal works with iptables redirection, at IP level; the certificate must cover the IP address. Let's Encrypt free certificates cannot be issued for an IP address, so I purchased a certificate from DotArai, a reseller for Sectigo.

While a certificate for nothing more than an IP address is possible, it seems that some web browsers do not support them, so the certificate must be issued for an FQDN, with a Subject Alternative Name for the IP address; this is also called a multi-domain certificate. The solution I bought is Sectigo (Comodo) PositiveSSL Multi-Domain, which allows up to 3 FQDNs (or one FQDN and one IP address in our case).

The key and CSR were generated locally; the configuration file used to generate the CSR is saved at the same location as the CSR itself.
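A sketch of the generation with openssl, assuming an illustrative configuration file csr.cnf that carries the Subject Alternative Names (e.g. subjectAltName = DNS:firewall.example.ac.th, IP:x.x.x.x):

openssl req -new -newkey rsa:2048 -nodes \
    -keyout firewall.key -out firewall.csr -config csr.cnf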

The certificate was issued in 24 hours, and payment could be made by credit card or deferred and paid on invoice. The certificate comes with 3 files: the certificate itself, an intermediate CA certificate and the Sectigo CA certificate.

The intermediate CA must be installed on the firewall with the Trusted CAs, while the certificate and the private key are installed in the imported certificates. The certificate has to be enabled on the web management page as well as on the captive portal authentication page.

It seems that the verification for issuing the certificate can go by email or by a challenge web page. While the DotArai registration page asks for an email address, the transaction was made with a challenge over the web. Email has been configured by adding the CSIM MX record to the firewall email address, so any email directed to the firewall should be received by the normal CSIM email system.

In our case, I communicated directly with a staff member at DotArai and the verification was made via a web page challenge. The challenge token must be installed in the folder /usr/local/apache2/htdocs, and the firewall configuration has to be modified so that the web server is accessible from outside: this may include adding rules in the firewall chain web_servers as well as adding interface ETH00 or IP addresses in the configuration of the web management. These modifications should be reverted as soon as possible.

The current certificate expires in June 2024.

Porting the virtual firewall to a physical machine

The installation must be done from a CD-ROM. While the firewall will boot and run from a USB key, the installation script requests access to /dev/cdrom. It can be done using an external CD drive connected via USB.

The steps of installing ZeroShell and repartitioning the disks must be done on the physical machine; then give a temporary IP address to the firewall and copy the profile from the virtual firewall to the physical machine.

Remove or disable any rule concerning the virtual router.

Error about the rule inside_client

During the boot, an error is shown complaining about loading the rule inside_client. I think it is because at this stage of the boot, the firewall has not yet fully loaded and the chain inside_client does not exist yet. When the firewall has finished booting, the error no longer seems to be a problem.


Posted by Olivier | Permanent link | File under: administration, firewall

Fri Jun 9 11:59:07 +07 2023

Another way to shutdown all the systems

When a power interruption is planned, it is easy to set up a programmed shutdown of the machines, using a scheduled shutdown.

The machines should be stopped in the following order, which conforms with the interdependencies between the services:

First, stop the machines that no other service depends on:
The machines are ufo, mail, firewall, router, bazooka, active, amanda, database, door, guppy, gourami, puffer and exam.
Second, stop the file server:
The machine is banyan.
Third, stop DNS and authentication:
The machines are dns and ldap.
Fourth, stop the syslog server:
The machine is sysl.
Fifth, stop the virtual servers:
The machines are virtual, virtual2, virtual3, virtual4, virtual5, teal1 and teal2.

There should be approximately 10 minutes between each step, so if the first phase stops on Sunday at 7:00, the second phase will be on Sunday at 7:10, etc., and the last phase on Sunday at 7:40.

Add a crontab entry line for the root user on each machine; this allows scheduling the shutdown a long time in advance.

The date and time fields for crontab are as follows:
minute hour day-of-the-month month day-of-the-week
where the minutes are between 0 and 59, the hours between 0 and 23, and Sunday is represented by either 0 or 7.

On FreeBSD the command is:
0 7 * * 0 /sbin/shutdown -p now
On Linux the command is:
0 7 * * 0 /sbin/shutdown -h now
On ESXi the procedure is:
Note: The time on ESXi servers is UTC, so the clock is 7 hours back: to stop the machine at 7:40, you must enter a time of 0:40. If you plan to stop the server around midnight, you must enter the time as 17:00 the previous day.
  1. Go to /var/spool/cron/crontabs
  2. Make the file root writable: chmod 600 root
  3. Edit the file root and add the line 40 0 * * 0 /bin/poweroff
  4. Find the crond process: ps -c | grep crond
  5. And kill it
  6. Restart the crond process: /usr/lib/vmware/busybox/bin/busybox crond

After the system has been restarted, you must remove or comment out the line in the crontab (except on ESXi servers and the firewall); otherwise the system will shut down again the next Sunday.

Amanda starts a backup run every night at 00:05; it is better to halt a machine around 23:50, before a run, rather than just after a new run has started, for example at 00:15.
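For example, to halt a Linux machine on Saturday night at 23:50 (Saturday is day 6 in the day-of-week field):

50 23 * * 6 /sbin/shutdown -h now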

Do not use the -p option on the door machine, because it restarts automatically as soon as it is powered down; it is better to keep the system halted but still with power on.


Posted by Olivier | Permanent link | File under: administration, firewall, vmware, freebsd, router