Thu Nov 16 11:33:35 +07 2023

Using Out of Band connection to CSIM

Out of Band (OOB) access can be used during an incident where the CSIM router or firewall is unresponsive and needs a reboot (like the incident of November 11th, 2023). It bypasses the router and the firewall and still gives access to the CSIM network.

1) Connect to the AIT VPN

Download the AIT VPN configuration from https://helpdesk.ait.ac.th/wp-content/uploads/sites/2/downloads/AIT_Net_vpn_ait_ac_th.ovpn and connect to the AIT VPN with a command like (for Ubuntu):

sudo openvpn --config AIT_Net_vpn_ait_ac_th.ovpn

NOTE: You *must* use your *AIT* credentials to connect. More information about the AIT VPN is available at https://helpdesk.ait.ac.th/services/ait-vpn/

2) Connect to the OOB device

ssh -i <key_file> -p 2222 on@oob-ait.cs.ait.ac.th

NOTE: You *must* authenticate with a key file.
NOTE: You *must* use port 2222.

If you need more than console access, for example the GUI of the firewall:

sudo ssh -i <key_file> -p 2222 -L 443:firewall:443 on@oob-ait.cs.ait.ac.th

The VMware GUI/vmplayer needs port 443. The Proxmox GUI needs port 8006. I don't *think* you can use the CSIM VPN at this stage.
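
For the Proxmox GUI, the same kind of tunnel works on port 8006; a sketch, with the Proxmox host left as a placeholder (sudo is not needed here because 8006 is not a privileged local port):

ssh -i <key_file> -p 2222 -L 8006:<proxmox_host>:8006 on@oob-ait.cs.ait.ac.th

The GUI is then reachable at https://localhost:8006 on the local machine.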

Posted by Olivier | Permanent link | File under: administration

Thu Jul 13 10:02:18 +07 2023

ZeroShell v3.9.5 is lazy to reload rules

It seems that the new release of ZeroShell takes quite some time to reload the rules once a rule is added or changed. It may take a matter of minutes.

Never ever attempt to restart the iptables service manually: it would restart the firewall with a very limited configuration, leaving the network wide open.


Posted by Olivier | Permanent link | File under: administration, firewall

Mon Jul 3 13:20:18 +07 2023

Print quota synchronization

AIT has a print-from-anywhere function based on the PaperCut software, which associates a print quota with each user. CSIM also has print quotas. Both systems must be synchronized.

The synchronization happens in two steps. Once a night, around 00:55, PaperCut sends emails to postmaster@cs.ait.ac.th with the detail of the pages printed by the users of each FoS. Then, around 10:25, we push the new credit available for each user back to PaperCut.

Synchronization from PaperCut to CSIM

This is the trickiest part! Once a night postmaster@cs.ait.ac.th receives one email for each of our FoS. The emails are forwarded to on@cs.ait.ac.th and processed by a procmail recipe that calls the script ~on/Ldap/procmail-extract-papercut.
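
The recipe itself is not reproduced here; a minimal sketch of what it could look like, assuming the reports are recognized by their sender address (the matching condition is an assumption, adjust it to the real headers):

# ~on/.procmailrc: pipe PaperCut report mails to the extraction script
:0
* ^From:.*papercut
| $HOME/Ldap/procmail-extract-papercut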

This script checks that the email has the expected contents and that a .csv file with the proper name is attached. The attachment is saved into a file and the script ~on/Ldap/do-report-papercut.pl is called on that file.

The second script is more complicated. As reports are sent once a night, if one report is lost, some data is missing. The script keeps track of the last report that has been processed in the file ~on/Ldap/timestamp-papercut.db. There is one timestamp for each FoS.

Daily reports cover the previous day, from 00:00:00 to 23:59:59, so every new report should start exactly one second after the end of the previous one. The period covered by a report is written in the headers of the .csv file.

The script detects gaps in the report coverage as well as overlaps. If the period sequence is broken, the file is not processed; instead, an email is generated that highlights the conflict and suggests generating a report manually. The manual report can then be processed by the same script.
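
The check itself lives in do-report-papercut.pl; the gist of it, as a minimal shell sketch using GNU date and example dates (the real script works on the dates found in the .csv headers and in timestamp-papercut.db):

# end of the last processed report and start of the new one (example dates)
prev_end=$(date -d '2023-07-01 23:59:59' +%s)
new_start=$(date -d '2023-07-02 00:00:00' +%s)
# the new report must start exactly one second after the previous one ended
if [ $((new_start - prev_end)) -ne 1 ]; then
    echo "gap or overlap in report coverage: rejecting the report" >&2
fi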

Once the period of the report has been validated, the print usage of the users listed in the report is updated accordingly.

The script also generates a list of users that are known in PaperCut but not in CSIM LDAP. That should only happen if a user has been wrongly associated with a FoS; it can only be rectified on the PaperCut side.

Finally, the script compares the cost calculated by PaperCut with the cost computed by CSIM. A discrepancy suggests that the rates stored in LDAP under ou=Ricoh cost,ou=Resources do not correspond to the costs used by PaperCut. The rates should then be updated in LDAP; there are 3 records to update: black & white, colour and the ratio B&W/colour. This is not a blocking error.
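
To inspect the current rates, something like the following should work (the directory suffix, server name and attribute layout are assumptions; adapt them to the real LDAP tree):

# hypothetical query of the printing rates stored in LDAP
ldapsearch -x -H ldap://ldap.cs.ait.ac.th -b 'ou=Ricoh cost,ou=Resources,dc=cs,dc=ait,dc=ac,dc=th'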

How to generate a report manually?

When there is an error in the synchronization from PaperCut to CSIM, a manual report must be generated to correct the error.

You must connect to the PaperCut web administration interface and go to the Reports menu. Choose to create an ad hoc user printing report.

You have to select the group in the form SET/ICT/FoS and enter the starting and ending date and time for the report. For the reason explained above, the date and time must be exact, otherwise the reports will be deemed non-consecutive: for example, if the last processed report ended at 23:59:59, the manual report must start at 00:00:00 the next day.

Once the .csv file has been saved, you can apply the script ~on/Ldap/do-report-papercut.pl on the file.
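
Assuming the script takes the saved report as its only argument (the exact invocation is not documented here), that would be:

~on/Ldap/do-report-papercut.pl /path/to/manual-report.csv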

Synchronization from CSIM to PaperCut

The synchronization is much simpler in this direction. Every day at 10:25 the script ~on/Ldap/update-account-papercut.pl is run via cron(8).
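
The corresponding crontab entry would look something like this (the real entry lives in the on user's crontab and may differ, for example in how the path is written):

# push the updated credit to PaperCut every day at 10:25
25 10 * * * /home/on/Ldap/update-account-papercut.pl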

A user that is in CSIM LDAP but not in PaperCut is either a mistake (a user from another FoS whose attribute csimIsCsim is not False) or a deleted user. In either case, setting csimIsCsim to False in LDAP for that user solves the problem.

The script goes through the lists of users in PaperCut and in LDAP and computes the number of pages printed and the remaining credit. The credit is then updated in PaperCut. Users deleted in LDAP have their credit set to zero.


Posted by Olivier | Permanent link | File under: administration, printing

Tue Jun 13 13:50:38 +07 2023

Migration of ZeroShell firewall

Running ZeroShell on VMware ESXi

The only problem with running ZeroShell as a bridged firewall on VMware is that the network adapters must run in promiscuous mode. You need to create port groups that allow promiscuous mode and forged transmits.

These port groups may be associated to existing virtual switches or to virtual switches created for the purpose of testing the firewall in VMware.
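
On a standard vSwitch this can be done from the web UI, or from the ESXi shell with something like the following (the port group name fw-test is an assumption):

# allow promiscuous mode and forged transmits on the port group used by the firewall
esxcli network vswitch standard portgroup policy security set \
    --portgroup-name="fw-test" --allow-promiscuous=true --allow-forged-transmits=true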

For the testing, I have created 3 virtual machines: ZeroShell, a router between CSIM and the internet, and a web server that represents the internet and is used to test the captive portal. I have created 2 virtual switches: one to connect the firewall to the router and one to connect the router to the internet server. And I have created 3 port groups: virtual CSIM and router, which are in promiscuous mode, and a normal virtual internet.

Installing ZeroShell

After booting the ZeroShell .iso, choose the Installation Manager and proceed to install ZeroShell on the disk. There is no need to create a profile at this stage.

The following stages of moving, resizing, deleting and creating partitions do not really require a prior backup: we are working on a brand new system and, in the worst case, it can be reinstalled from scratch.

The installation will create 4 partitions:

  Device Boot      Start         End      Blocks   Id  Type      Mount
/dev/sda1   *        2048      432127      215040   83  ext3     [boot]
/dev/sda2          432128     1087487      327680   83  iso9660  /cdrom
/dev/sda3         1087488     1742847      327680   83  iso9660  [unused]
/dev/sda4         1742848   146800639    72528896   83  ext4     /DB

The first partition contains the Linux kernel. The second partition contains the ZeroShell system that has been configured for the computer. The third partition is just a copy of the second partition before any modification. The last partition will be used to create the profiles.

My understanding is that the 3rd partition is not used. We need to modify the second partition to include our patches, but a partition of type iso9660 (CD-ROM) is not writable. Instead we will create a copy of the second partition in partition 3, on a writable Linux file system.

After the installation, we remove the installation media and reboot to check that the firewall is starting normally. After entering the shell, we use fdisk to delete the partitions 3 and 4. We need to reboot after exiting fdisk.

Then we recreate partitions 3 and 4 with fdisk. Even though only about 1 GB is used on partition 3, I chose to make it 50 GB. Partition 4 uses all the rest of the available disk. Reboot after exiting fdisk.

A Linux file system is created on partitions 3 and 4:

mkfs -t ext3 /dev/sda3
mkfs -t ext3 /dev/sda4

Next we will copy the second partition onto partition 3:

mount /dev/sda3 /mnt
cd /cdrom
tar cf - . --one-file-system|(cd /mnt; tar xfBp -)
umount /mnt
mount /dev/sda3 /cdrom

The option --one-file-system is very important in the tar copy; it ensures that the copy does not cross mount points into other file systems. If the option is omitted, the copy results in a system that does not work.

Once the copy is done, mounting the new partition 3 on top of /cdrom makes it active; the system should still be running normally. Then reboot the system.

Configuring ZeroShell

A new profile can now be created. Save it on partition 4. Choose the interface ETH01 as the management interface and give it an IP address different from that of the running firewall: as this is a transparent/bridged firewall, the IP address does not matter for the client devices in CSIM. Use the normal CSIM router address as the gateway.

Activate the new profile; this reboots the firewall. The firewall is then accessible through the web management interface. Enable ssh and try a command-line connection:

ssh admin@<IP address>

Get to a shell prompt and try mounting /dev/sda3 on top of /cdrom. The web interface should still work fine. In the Pre Boot script, add the command:

mount /dev/sda3 /cdrom

Reboot and check that /dev/sda3 is properly mounted.

We can now copy the configuration from the existing firewall to the virtual one. This has to be done manually, using the running configuration of the existing firewall as a model: simply copying the profile introduced some incompatibilities, so the configuration had to be recreated from scratch.

Using a virtual machine that interacts with the CSIM network allows me to use our existing DNS and RADIUS servers. RADIUS has to be configured to allow the IP of the new firewall in clients.conf and sites-enabled/default. Note that in sites-enabled/default, the if statement must go on a single line.
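
A minimal sketch of the clients.conf addition, assuming FreeRADIUS syntax and using a made-up address and secret:

# allow the new (virtual) firewall to talk to the RADIUS server
client new-firewall {
        ipaddr = 192.168.1.254      # IP address of the new firewall (made-up)
        secret = changeme           # shared secret, must match the firewall configuration
}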

Finally, configure the firewall section. I used tar to copy the active profile /DB/_DB.002 from the old firewall to a /DB/_DB.002 profile on the new firewall. This profile must not be activated! From that profile, I then copied /DB/_DB.002/var/register/system/net/FW/Chains into the active profile /Database/var/register/system/net/FW/Chains.

The firewall rules have to be edited to replace any occurrence of the IP address of the old firewall with the IP address of the new firewall. The command iptables-save | grep <IP address> will show the list.

Similarly, any rule involving the router IP address has to be duplicated to include the IP address of the virtual router.

The virtual web server used to test the captive portal uses a private IP address, and private addresses are blocked by the firewall. To allow testing the captive portal, a rule must be added to the chain private_from_out (incoming packets from the outside) and to the matching chain for outgoing packets, opening only the 192.168.0.0/30 prefix so that it covers just the virtual router and the virtual web server.

Once the firewall goes into production, the duplicated rules for the virtual router and the rules that temporarily open 192.168.0.0/30 should be removed.

Customization

All customization is saved in /Database. The /Database/Patch directory stores the patches that are automatically applied to the system at boot time; the automatic application is handled by the Pre Boot script.

There are two areas that I customized. First, I want the Captive Portal to allow the free Authorized service for whole AIT subnets and not for a single IP only; these are the patches cpAddClient, cpAddService and fw_sys_chain. Besides modifying the web interface, this includes some modifications to the firewall rules.

The other patches only change the colour scheme used by the Captive Portal, as seen by the users.

The Pre Boot script to apply the patches:

# apply every patch stored in /Database/Patch; paths in the patches are relative to /
cd /
if [ -d '/Database/Patch' ] ; then
  for i in /Database/Patch/* ; do
    # when the directory is empty the glob does not expand: skip the literal pattern
    if [ "$i" != '/Database/Patch/*' ] ; then
      # -r /dev/null discards rejected hunks instead of leaving .rej files around
      patch -p0 -r /dev/null <$i
    fi
  done
fi
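
To add a new patch, the file dropped in /Database/Patch needs headers that patch -p0 run from / can resolve; a hedged sketch of producing one (the file locations inside the ZeroShell tree are placeholders):

# create the patch from a pristine copy and the modified file, with paths relative to /
cd /
diff -u cdrom/path/to/file.orig cdrom/path/to/file > /Database/Patch/my-patch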

Installing software

The most demanding part was making the Amanda backup client available, as it required cross-compiling.

I used the crosstool-NG environment to compile Amanda. I built it with the configuration here. I used option 3 when using the toolchain.

I configured and built Amanda statically. I chose an old version, Amanda 2.4.5p1, because it does not require glib. I had to link each program in client-src manually and then copied them into the folder /Database/work/amanda on the firewall. Note that runtar must be suid root. configure for Amanda:

./configure --prefix=/opt --enable-static --enable-static-binary \
--with-user=amanda --with-group=amanda --without-ipv6 \
--with-index-server=amanda.cs.ait.ac.th --without-server \
--with-gnutar=/opt/bin/tar \
--with-gnutar-listdir=/opt/var/amanda/gnutar-lists \
--build=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
CPPFLAGS="-static -I$HOME/staging4/usr/include" \
CFLAGS="-static -m32 -I$HOME/staging4/usr/include -L$HOME/staging4/lib -L$HOME/staging4/usr/lib" \
LDFLAGS="-static -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"

I also built xinetd to run Amanda. I had to use CFLAGS instead of CPPFLAGS and add the options -m32 -static. I installed xinetd in /Database/work/bin on the firewall. Note that xinetd does not honour the DESTDIR variable, so do not try to 'make install' it. configure for xinetd:

./configure --build=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
--prefix=/usr CPPFLAGS="-I$HOME/staging4/usr/include" \
CFLAGS="-static -m32 -I$HOME/staging4/usr/include -L$HOME/staging4/lib -L$HOME/staging4/usr/lib" \
LDFLAGS="-static -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"

The following lines in the Pre Boot script on the firewall take care of recreating the Amanda environment at each reboot:

# recreate the amanda user and group (the running system is rebuilt at each boot)
echo amanda:x:14:14:Amanda:/opt/var/amanda:/bin/false >>/etc/passwd
echo amanda:x:14: >>/etc/group
cd /Database/work
# install the xinetd configuration and init script
cp /Database/work/xinetd.conf /etc
cp /Database/work/xinetd /etc/rc.d/init.d/
# link the Amanda binaries and state files kept on the persistent /Database partition
ln -s /Database/work/amanda /opt/libexec
ln -s /Database/work/amanda-var/amandates /etc
mkdir -p /opt/var
ln -s /Database/work/amanda-var /opt/var/amanda
# GNU tar and a small emacs
cp /Database/work/bin/tar /bin/tar
cp /Database/work/bin/em /bin/emacs

# To make it available for backup
mkdir /boot2
mount /dev/sda1 /boot2

I also installed GNU tar version 1.34 as well as µemacs.

configure for GNU tar:

./configure --build=i686-unknown-linux-gnu --target=i686-unknown-linux-gnu \
--prefix=/usr CPPFLAGS="-I$HOME/staging4/usr/include" \
CFLAGS="-static -m32 -I$HOME/staging4/usr/include -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"
LDFLAGS="-static -L$HOME/staging4/lib -L$HOME/staging4/usr/lib"

The script Post Boot starts the service xinetd once the firewall is online.

Finally, I created a cron script that runs every night at 01:00 and makes a list of the rules that have expired. That list is uploaded to ufo to be used when expiring users.

SSL certificate

As this new firewall supports recent cipher versions, it is worth installing an official SSL certificate.

The ZeroShell captive portal works with an iptables redirection, at the IP level, so the certificate must cover the IP address. Let's Encrypt free certificates cannot be issued for an IP address, so I purchased a certificate from DotArai, a reseller for Sectigo.

While a certificate for nothing more than an IP address is possible, some web browsers do not seem to support them, so the certificate must be issued for an FQDN, with a Subject Alternative Name for the IP address; this is also called a multi-domain certificate. The product I bought is Sectigo (Comodo) PositiveSSL Multi-Domain, which allows up to 3 FQDNs (or one FQDN and one IP address in our case).

The key and CSR have been generated locally; the configuration file used to generate the CSR is saved at the same location as the CSR itself.

The certificate was issued within 24 hours and payment could be made by credit card or deferred and paid on invoice. The certificate comes as 3 files: the certificate itself, an intermediate CA and the Sectigo CA.

The intermediate CA must be installed on the firewall among the Trusted CAs, while the certificate and the private key are installed among the imported certificates. The certificate then has to be enabled for the web management page as well as for the captive portal authentication page.

It seems that the verification for issuing the certificate can be done by email or through a web page challenge. While the DotArai registration page asks for an email address, the transaction was done with a challenge over the web. Email has nevertheless been configured by adding a CSIM MX record for the firewall's address, so any email directed to the firewall should be received by the normal CSIM email system.
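
The MX setup can be checked with dig; a quick sketch, assuming the firewall's FQDN is firewall.cs.ait.ac.th (adjust to the real name):

# should return the normal CSIM mail exchanger(s)
dig +short firewall.cs.ait.ac.th MX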

In our case, I communicated directly with a member of staff at DotArai and the verification was done via a web page challenge. The challenge token must be installed in the folder /usr/local/apache2/htdocs and the firewall configuration has to be modified so that the web server is reachable from outside: this may include adding rules to the firewall chain web_servers as well as adding the interface ETH00 or IP addresses in the web management configuration. These modifications should be reverted as soon as possible.

The current certificate expires in June 2024.

Porting the virtual firewall to a physical machine

The installation must be done from a CD-ROM. While the firewall will boot and run from a USB key, the installation script requires access to /dev/cdrom. This can be done using an external CD drive connected via USB.

The steps of installing ZeroShell and repartitioning the disk must be repeated on the physical machine; then give the firewall a temporary IP address and copy the profile from the virtual firewall to the physical machine.

Remove or disable any rule concerning the virtual router.

Error about the rule inside_client

During the boot, an error is shown about loading the rule inside_client. I think it is because, at this stage of the boot, the firewall rules have not yet been fully loaded and the chain inside_client does not exist yet. Once the firewall has finished booting, the error does not seem to be a problem anymore.


Posted by Olivier | Permanent link | File under: administration, firewall

Fri Jun 9 11:59:07 +07 2023

Another way to shutdown all the systems

When a power interruption is planned, it is easy to set up a scheduled shutdown of the machines in advance.

The machines should be stopped in the following order, which respects the interdependencies between the services:

First, stop the machines that no other service depends on:
The machines are ufo, mail, firewall, router, bazooka, active, amanda, database, door, guppy, gourami, puffer and exam.
Second, stop the file server:
The machine is banyan.
Third, stop DNS and authentication:
The machines are dns and ldap.
Fourth, stop the syslog server:
The machine is sysl.
Fifth, stop the virtual servers:
The machines are virtual, virtual2, virtual3, virtual4, virtual5, teal1 and teal2.

There should be approximately 10 minutes between steps: if the first phase stops on Sunday at 7:00, the second phase will be on Sunday at 7:10, etc., and the last phase on Sunday at 7:40.

Add a crontab entry for the root user on each machine; this allows scheduling the shutdown a long time in advance.

The date and time fields for crontab are as follows:
minute hour day-of-month month day-of-week
where the minutes are 0-59, the hours 0-23 and Sunday is represented by either 0 or 7.

On FreeBSD the command is:
0 7 * * 0 /sbin/shutdown -p now
On Linux the command is:
0 7 * * 0 /sbin/shutdown -h now
On ESXi the procedure is:
Note: The time on ESXi servers is UTC, so the clock is 7 hours behind: to stop the machine at 7:40, you must enter a time of 0:40. If you plan to stop the server around midnight, you must enter the time as 17:00 (which also falls on the previous day in UTC, so adjust the day-of-week field accordingly).
  1. Go to /var/spool/cron/crontab
  2. Make the file root writable: chmod 600 root
  3. Edit the file root and add the line 40 0 * * 0 /bin/poweroff
  4. Find the crond process: ps -c | grep crond
  5. Kill it
  6. Restart the crond process: /usr/lib/vmware/busybox/bin/busybox crond

After the system has been restarted, you must remove or comment out the line in the crontab (except on the ESXi servers and the firewall), otherwise the system will shut down again on the next Sunday.

Amanda starts a backup run every night at 00:05; it is better to halt a machine around 23:50, before a run starts, than just after a new run has started, for example at 00:15.
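
For example, to halt a Linux machine on Saturday at 23:50 (just before the Sunday 00:05 run) the crontab entry would be:

50 23 * * 6 /sbin/shutdown -h now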

Do not use the -p option on the door machine: it powers back on automatically as soon as it is powered down, so it is better to leave the system halted but still powered.


Posted by Olivier | Permanent link | File under: administration, firewall, vmware, freebsd, router

Fri May 5 15:14:45 +07 2023

Some issues with Amanda

24 March 2023

> I have had Amanda running for over a decade, yesterday I had no issue at
> all but last night, my backups for Ubuntu machines started crashing
> consistently with the error:
>
> strange(?): runtar: error [runtar invalid option: -]
>
> The just-released amanda package upgrade seems to have a regression for
> GNUTAR DLEs; see:
> https://bugs.launchpad.net/debian/+source/amanda/+bug/2012536/

See amanda.conf for the workaround.

======================================================================

2 May 2023 (reported to amanda-hackers@amanda.org on 5 May 2023)

I recently had amrecover hang because amidxtaped would die with the following messages in amidxtaped.log:

Tue May 2 12:38:48 2023: thd-0x802e95800: amidxtaped: warning: Use of uninitialized value $str in pattern match (m//) at /usr/local/lib/perl5/site_perl/Amanda/DB/Catalog.pm line 764.
Tue May 2 12:38:48 2023: thd-0x802e95800: amidxtaped: warning: Use of uninitialized value $s in concatenation (.) or string at /usr/local/lib/perl5/site_perl/Amanda/DB/Catalog.pm line 764.
Tue May 2 12:38:48 2023: thd-0x802e95800: amidxtaped: critical (fatal): '' at /usr/local/lib/perl5/site_perl/Amanda/DB/Catalog.pm line 764.
amidxtaped: '' at /usr/local/lib/perl5/site_perl/Amanda/DB/Catalog.pm line 764.

Digging into Catalog.pm, I managed to identify the log file that was causing the error; the specific line in that log file is:

DONE taper WARNING driver Taper protocol error

Modify /usr/local/lib/perl5/site_perl/Amanda/DB/Catalog.pm to print the name of the file.
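
A quicker way to find the offending log file, once the bad line is known, is a grep over the Amanda log directory; a sketch, with the logdir left as a placeholder (it is set in amanda.conf):

# list the log files containing the line that makes Catalog.pm die
grep -l 'DONE taper WARNING driver Taper protocol error' <logdir>/log.*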

Posted by Olivier | Permanent link | File under: administration, backup

Tue May 2 12:26:10 +07 2023

About firewall certificate

Let's Encrypt does not issue certificates for IP addresses. The redirection in the firewall works with the IP address (it redirects the traffic for authentication to https://a.b.c.d:12081; this mechanism cannot be changed, as it is set inside an executable for which we have no source), so it is not possible to install an officially signed certificate on the firewall.

Also, I did try to install a certificate manually and completely messed up the firewall, to the point that Apache would not start, so there was no web administration interface nor Captive Portal anymore.

I had to restore the following files from backup:

  • /DB/_DB.002/etc/ssl/certs/admin_user.pem
  • /DB/_DB.002/etc/ssl/certs/crl.pem
  • /DB/_DB.002/etc/ssl/certs/firewall.cs.ait.ac.th_host.pem
  • /DB/_DB.002/etc/ssl/certs/imported_Certs/04.pem

This assumes that we are using profile 2 and that the installed certificate is number 4.

Do not mess manually with the certificates.

Even with no Apache and no Captive Portal, the firewall filtering remained active.


Posted by Olivier | Permanent link | File under: administration, firewall, backup

Thu Feb 23 11:46:08 +07 2023

Change redis owner after update on mail server

The upgrade procedure of redis resets the ownership of the database to the default user "redis". For historical reasons, we are using the user "kluser" (because Kaspersky did not allow using anything else). After upgrading redis, you must set the ownership back to kluser on both /var/run/redis and /var/db/redis. On the mail server:

sudo chown -R kluser /var/run/redis
sudo chown -R kluser /var/db/redis

Posted by Olivier | Permanent link | File under: administration

Thu Feb 16 13:32:52 +07 2023

Managing VPN certificates

Users can create their own VPN certificates on the account management web page. VPN authentication is based on the user's credentials and a certificate protected by a password. The certificate is valid for one year.

To revoke a certificate, connect to ufo and issue:

sudo -u httpd /var/db/http/rsa.scripts/rsa_wrapper2.pl revoke <user name>

The script rsa_wrapper2.pl can be used to create a certificate, show it, show its expiration date, revoke it, or renew the CRL (Certificate Revocation List). The CRL has a validity of 6 months and is not renewed unless a user is revoked. A crontab job renews the CRL on the 3rd day of every fourth month, at 2:57. This should solve the error about an expired CRL.

Note that rsa_wrapper2.pl should be used and not rsa_wrapper.pl: it calls the script easyrsa.real.test, which has been modified to work around a bug in OpenSSL. I have tried with the native OpenSSL (1.1.1o) installed with FreeBSD 13 and with OpenSSL 3.2.0-dev and the bug is still there. I *think* it is the bug about EPERM that is detailed in "TTY_get() in crypto/ui/ui_openssl.c open_console() can also return er..." <https://github.com/openssl/openssl/commit/082394839ea32386abc7ee33aaa9da864287064c>

To work around the bug, I have modified easyrsa.real.test to pass the CA passphrase with:

echo "passphrase" | openssl ... -passin stdin

instead of:

openssl ... -passin pass:passphrase

Note that the passphrase is hardcoded in the script, and that the workaround is VERY crude.

Note about the bug: it only shows up when the web page is created through Joomla:

Joomla3 -> php wrapper -> Perl script    -> Perl script     -> Bourne shell script -> openssl
php     -> php         -> connect-iframe -> rsa_wrapper2.pl -> easyrsa.real.test   -> openssl

and not when the page is created only by a Perl CGI script:

Perl script -> Perl script    -> Bourne shell script -> openssl
connect     -> rsa_wrapper.pl -> easyrsa.real        -> openssl

I could not make a smaller example.
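
The CRL renewal schedule described above would correspond to a crontab entry along these lines (the exact subcommand for renewing the CRL is not documented here, so it appears as a placeholder):

# renew the CRL at 02:57 on the 3rd day of every fourth month
57 2 3 */4 * sudo -u httpd /var/db/http/rsa.scripts/rsa_wrapper2.pl <renew-CRL subcommand>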

Posted by Olivier | Permanent link | File under: administration

Thu Sep 29 13:03:32 +07 2022

Some stuff related VMware and SSL certificates

Certificates are located in /etc/vmware/ssl:

rui.key is the private key (mode 400)
rui.crt is the certificate + the CA (mode 644)

The private key is installed once and for all. The certificate is generated/installed by ~on/letsencrypt/install_cert_virtual

After installing the new certificates, restart the management agents:

/etc/init.d/hostd restart
/etc/init.d/vpxa restart

If that does not work, try to restart the agents manually from the console, or try the command dcui from an ssh connection.

I have not tried the solutions below. See also https://www.nakivo.com/blog/how-to-restart-management-agents-on-a-vmware-esxi-host/

4. Use this command as an alternative, to restart all management agents on the ESXi host; the progress of the restart is displayed in the console output:

services.sh restart & tail -f /var/log/jumpstart-stdout.log

5. You can also try to reset the management network on a VMkernel interface:

esxcli network ip interface set -e false -i vmk0; esxcli network ip interface set -e true -i vmk0

The vmk0 interface is used by default on ESXi; if your management network interface has a different name, use that name in the command. The command consists of two basic commands separated by a semicolon: the first disables the vmk0 management network interface and, once vmk0 is down, the second re-enables it. As a result, the ESXi management network interface is restarted.

The authorized_keys file for root is in /etc/ssh/keys_root/authorized_keys. It can be used from ufo with:

sudo ssh -i /root/.ssh/id_rsa_virtual root@virtualX

Crontab for root is in /var/spool/cron/crontab/root
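
To check which certificate is currently installed and when it expires, this should work from the ESXi shell (openssl is available there):

# show the subject and validity dates of the installed certificate
openssl x509 -in /etc/vmware/ssl/rui.crt -noout -subject -dates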

Posted by Olivier | Permanent link