Friday, August 11, 2017

Using tinc VPN for linking up a Linux data cluster with static IPv6 addresses

10 years ago, I wrote up this tutorial on how to use tinc to create a basic IPv6 network between multiple sites. I followed up on that work with a more robust network: that is documented on my own website, and frankly could use some updating. At least one person made a cleaned up variant.

Over the four years since leaving the employer where I built that network, the acceptance and use of tinc and/or IPv6 has been rather mixed. However, I think I've found a use case for this past research: data clusters. In fact, another person (and another person as well) seems to have had a similar thought for their own needs. It also helps that most applications have full IPv6 support now: something that was barely in place 10 years ago.

Installation

CentOS/RHEL 7.x: yum install tinc ; you'll need to edit the iptables & ip6tables files in /etc/sysconfig , or use firewalld commands (a sketch follows), to allow TCP and UDP on port 655.
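
If you go the firewalld route, something like this should do it (a sketch; add zone flags to taste):

firewall-cmd --permanent --add-port=655/tcp
firewall-cmd --permanent --add-port=655/udp
firewall-cmd --reload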

Debian/Ubuntu: grab the latest tinc-1.0.x deb from somewhere like Ubuntu Packages. It's also possible an apt install tinc will get you a new-enough version. You'll also need to run ufw allow 655/tcp && ufw allow 655/udp if you're using Uncomplicated Firewall.

Configuration


We're going to call this "privatelan", and assume you're using sudo / root to do this work. For editing, make sure you have nano, vi, etc. installed.
  1. mkdir -p -m 0700 /etc/tinc/privatelan/hosts
  2. In the past, you needed to edit /etc/tinc/nets.boot : but if you're using systemd, go ahead and delete it. Otherwise, edit it to have privatelan as a line in it.
  3. If ls /etc/init.d/tinc* finds a file, and you're using systemd, you'll need the systemd scripts. Download both .service files and copy them to /etc/systemd/system .
  4. You'll need to pick out / assign a ULA for this. You can use a ULA generator to pick a /64.
  5. Create and edit the files /etc/tinc/privatelan/tinc.conf , /etc/tinc/privatelan/tinc-up , and /etc/tinc/privatelan/tinc-down .
  6. chmod u+x /etc/tinc/privatelan/tinc-*
  7. tincd -n privatelan -K will generate a private key, and create a host file in /etc/tinc/privatelan/hosts. Optionally, you can edit that new host file to add Compression=# (1-11, with 1-9 being zlib levels, and 10-11 LZO).
  8. The master node needs a copy of the host file from every member system: those get copied to /etc/tinc/privatelan/hosts.
  9. All client nodes need copies of the master node's host file. That file has to have something like Address=(DNS name or IP) near the top so clients can find that host.
  10. When a node is ready, you can systemctl enable tinc@privatelan && systemctl restart tinc@privatelan ; check your distro if not using systemd.
  11. You should either add your new IPv6 addresses to your local DNS, or populate them as a batch in /etc/hosts (see the example after this list).
  12. When in doubt, read the docs! Thanks!
  13. Added: on CentOS / RHEL systems, you may need to make sure SELinux isn't blocking anything, by doing an audit2allow -a
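
For step 11, here's a sketch of what those /etc/hosts entries could look like, using the documentation prefix from the config files below (the hostnames are placeholders):

2001:db8:beef:1::1   master.privatelan master
2001:db8:beef:1::2   node1.privatelan node1
2001:db8:beef:1::3   node2.privatelan node2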

Config files


tinc.conf

Name=node
Device=/dev/net/tun
Interface=privatelan
Mode=switch
#Comment out the next line if this is the master
ConnectTo=master

tinc-up

#!/bin/sh
ip link set privatelan mtu 1280 qlen 4096 up
ip -6 address add 2001:db8:beef:1::1/64 dev privatelan

tinc-down

#!/bin/sh
ip -6 address del 2001:db8:beef:1::1/64 dev privatelan
ip link set privatelan down

hosts/node

Address=192.0.2.200
Compression=10

===SSHkey===

Tuesday, August 1, 2017

Accessing & upgrading a Debian Bitnami VM

So I stumbled across Bitnami recently. It's nice to be able to download ready-to-go VMs for different pieces of software. One I tried was a VMware OVA that used Debian 8 as its base. But the current release of Debian is 9.1, and there was no immediate obstacle to upgrading (MySQL -> MariaDB compatibility is an issue for some apps).

I. I needed SSH access. After loading the VM, I was able to enable SSH by following the Bitnami instructions for doing so. Bitnami adds an extra /etc/ssh/sshd_not_to_be_run file to keep sshd disabled, even after you enable the service.

II. I modified instructions from this source and another tool to make something that worked for me. Log in via SSH as the bitnami user; for the last step, keep existing files if asked...

  1. sudo su -
  2. cp /etc/apt/sources.list /etc/apt/sources.list_backup
  3. apt install nano deborphan
  4. wget https://launchpad.net/~utappia/+archive/ubuntu/stable/+files/ucaresystem-core_3.0-1+xenial2_all.deb
  5. dpkg -i ucaresystem-core_3.0-1+xenial2_all.deb
  6. sed -i 's/jessie/stretch/g' /etc/apt/sources.list
  7. apt update && ucaresystem-core

III. Bitnami used the extlinux bootloader for the VM I had, so I had to manually edit it to accept the newer kernel: nano /extlinux.conf ; change the kernel line to /vmlinuz , and change initrd= to use /initrd.img as the target. At the end of the "append" line, add scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y or elevator=noop , per what VMware and others have suggested of late.
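
A rough sketch of what the edited stanza should end up like (the label and the root= device here are assumptions; keep whatever your existing file has for those):

default linux
label linux
        kernel /vmlinuz
        append initrd=/initrd.img root=/dev/sda1 ro quiet scsi_mod.use_blk_mq=y dm_mod.use_blk_mq=y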

IV. Of course you should reboot the VM.

You should be able to modify this process to upgrade other Debian & Ubuntu VMs; just be wary of how things work on different versions (especially if you're trying to hop from a release that predates systemd).

Errata

Since you'd be using kernel 4.9 or newer, give this a whirl in your /etc/sysctl.conf (the fq qdisc line is included because BBR needs fq-based pacing on pre-4.13 kernels):

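# Assumed addition: BBR on pre-4.13 kernels needs the fq qdisc for pacing
net.core.default_qdisc=fq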
net.ipv4.tcp_congestion_control=bbr
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.core.somaxconn=1024
net.core.netdev_max_backlog=2048
fs.file-max=1000000
net.core.bpf_jit_enable=1


Updated Aug 2, 2017 to include extlinux.conf changes + use of TCP_BBR.

Friday, March 17, 2017

Weird missing drives error on a not-that-old laptop

I was working on an ASUS "Republic of Gamers" laptop for a coworker the other night. An otherwise decent piece of hardware was being run off a 5400 RPM "quiet" hard drive, so I migrated the data over to a spare 500GB SSD using Parted Magic, and moved the old drive to the second bay. Pretty straightforward so far.

I probably spent the next hour trying to figure out why neither hard drive was coming up as a boot option: disabling Secure Boot, re-enabling it, toggling the CSM, trying to do Startup Repair with the Windows 10 USB drive the system could detect, a BIOS/UEFI update... It turns out the BIOS/UEFI was detecting partitions, not physical drives, and I had partitioned the drives as GPT, not MSDOS format. Using Parted Magic again (gdisk and fdisk, specifically), I converted the partition tables back to MSDOS format, and then attempted to fix the Windows startup. I used instructions similar to these for getting things going again: bootsect.exe /nt60 all was the magic command in the Windows 10 recovery command prompt.
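
If you need to do the same conversion, gdisk's recovery/transformation menu can flip GPT back to MBR; a sketch, with the device name assumed:

gdisk /dev/sda
  r   (recovery and transformation options)
  g   (convert GPT into MBR and exit)
  w   (write the new MBR table)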

Newer systems and laptops should be just fine with GPT, but it was interesting to me that the boot menu of a UEFI system wasn't detecting GPT-formatted drives.

Wednesday, March 8, 2017

De-duplicating XML validator in C#

Thought I needed XML validation for a project I'm working on, and wanted to be able to merge together several different schemas to check against. I also kept running into an error of Wildcard '##any' allows element (a UPA check complaint).

Overall, this seems to kinda work, but I may or may not need it. Maybe someone else can get it to work better; it builds on an original fix for duplicates that I found.

Needs System.Xml.Schema & System.Collections.Generic (plus System for the Console output)

public static XmlSchemaSet MergeSchemaFiles(string[] schemaFiles)
{
    // Load each schema file into its own XmlSchemaSet
    var schemas = new List<XmlSchemaSet>();
    int sfi = 0;
    foreach (var sf in schemaFiles) {
        var tempFileXSS = new XmlSchemaSet();
        tempFileXSS.Add(null, sf);
        schemas.Add(tempFileXSS);
        schemas[sfi].CompilationSettings.EnableUpaCheck = false;
        schemas[sfi].Compile();
        Console.WriteLine("Loading schema from: " + sf + ", with " + schemas[sfi].GlobalElements.Values.Count + " elements.");
        sfi++;
    }
    // Merge schemas into one schema set, dropping any global element
    // that duplicates one already defined in the first schema
    var tempXSS = new XmlSchemaSet();
    tempXSS.Add(schemas[0]);
    for (int i = 1; i < schemas.Count; i++) {
        foreach (XmlSchemaElement xse0 in schemas[0].GlobalElements.Values) {
            foreach (XmlSchemaElement xseI in schemas[i].GlobalElements.Values) {
                if (xseI.QualifiedName.Equals(xse0.QualifiedName)) {
                    ((XmlSchema)xseI.Parent).Items.Remove(xseI);
                    break;
                }
            }
        }
        // Reprocess so the removals take effect, then recompile
        foreach (XmlSchema schema in schemas[i].Schemas()) {
            schemas[i].Reprocess(schema);
        }
        schemas[i].Compile();
        tempXSS.Add(schemas[i]);
    }
    // Return results
    Console.WriteLine("Retained " + schemas.Count + " XML schemas");
    return tempXSS;
}
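
A quick usage sketch (the file names are hypothetical), wiring the merged set into a validating XmlReader; this part also needs System.Xml:

var merged = MergeSchemaFiles(new[] { "base.xsd", "extensions.xsd" });
var settings = new XmlReaderSettings { ValidationType = ValidationType.Schema };
settings.Schemas.Add(merged);
settings.ValidationEventHandler += (s, e) => Console.WriteLine(e.Message);
using (var reader = XmlReader.Create("document.xml", settings)) {
    while (reader.Read()) { } // validation happens as the reader advances
}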

Thursday, February 9, 2017

C# snippet on easy multi-threading / parallel processing

I've had a need lately to figure out how to successfully execute parallel programming tasks in C#. I finally figured out a basic way to do this that uses current versions of .NET. The Threading in C# website was key to this.

using System.Threading.Tasks;
// "c" is the current element, "state" lets you break out of the loop early,
// and "i" is an index you can refer to in the body
Parallel.ForEach(array, (c, state, i) => {
    SomeClass.SomeFunction(c); // placeholder: call your own method here
});
On an unrelated note: if you have to parse XML for some reason, XElement seems to be much easier to use than the older libraries.
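
A minimal XElement sketch for comparison (the file and element names are made up; needs System.Xml.Linq):

var doc = XElement.Load("settings.xml");
foreach (var item in doc.Descendants("item")) {
    Console.WriteLine((string)item.Attribute("id"));
}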

Added: someone else wrote a good comparison of different threading methods in .NET.

Friday, January 29, 2016

Multi-hosted Nginx on RHEL 7 / CentOS 7 with PHP support

I learned recently that one can construct a pretty effective multi-hosted Nginx + PHP server on CentOS. There are a number of obstacles to deal with, but once handled, the results are promising. This configuration handles a basic wildcard / multi-site setup: I imagine you could expand it to support HTTPS via Let's Encrypt or wildcard domain certs via additional Nginx config files.

This setup in particular solves two problems: a "wildcard" Nginx setup; and the ability to easily create and delete hosted websites. This draws upon some references, older blog posts, and stuff done at the office. Assume any commands require sudo/root and an SSH terminal to complete.

1. Set up your CentOS VM as you normally would for your environment. If installing FirewallD, be sure the HTTP / HTTPS services are open.
2. yum install epel-release pwgen zip unzip bzip2 policycoreutils-python -y : ensure some basic essentials are loaded. Also make sure your favorite editor (vim / nano / whatever) is installed.
3. Install Nginx from their repo.
4. Install PHP 5.6 (5.5 for older stuff) from the Webtatic repo. One deviation: don't run yum install php56w php56w-opcache ; instead, run yum install php56w-cli php56w-opcache php56w-fpm -y for your base install (the original command loads Apache). Don't forget to load any additional PHP modules.
5. Edit /etc/php.ini : set date.timezone to a value per the timezone list, and set upload_max_filesize to a larger value if you're going to be allowing file uploads.
6. Edit /etc/php-fpm.d/www.conf : change listen.owner and listen.group to nginx ; listen.mode to 0666 ; user and group to nginx ; pm = dynamic to pm = ondemand ; and set security.limit_extensions to allow .php .htm .html if you're going to run any PHP-in-HTML code.
7. Edit /etc/security/limits.d/custom.conf : add * soft nofile 8192 and * hard nofile 8192 to it.
8. Add the following to the end of /etc/ssh/sshd_config

Match Group nginx
        ChrootDirectory %h
        ForceCommand internal-sftp
        AllowTcpForwarding no
9. Clear out / set-aside the conf files in /etc/nginx/conf.d
10. mkdir /web && mkdir /web/scripts && mkdir /web/sites
11. systemctl enable php-fpm nginx
12. Create all the specified configuration files, then reboot your VM.
13. Start building out your sites using the "create_site" script for each one. At some point, you're going to run into SELinux permissions issues: try the following to mitigate them (you may have to do this twice to identify all the correct policies)...

cd ~
rm -f php.pp
rm -f nginx.pp
grep php /var/log/audit/audit.log | audit2allow -M php
semodule -i php.pp
grep nginx /var/log/audit/audit.log | audit2allow -M nginx
semodule -i nginx.pp


Configuration files

/etc/sysctl.d/custom.conf


net.ipv4.tcp_congestion_control=illinois
net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_reuse=1
net.core.somaxconn=1024
net.core.netdev_max_backlog=2048
fs.file-max=1000000
net.core.bpf_jit_enable=1
vm.swappiness=1

/etc/nginx/conf.d/servername.conf


server {
    listen       80;
    server_name  _;
    set $site_root /web/sites/$host/public_html;
    charset utf-8;
    #access_log  /var/log/nginx/log/host.access.log  main;
    location / {
        root $site_root;
        index index.php index.html index.htm;
    }
    # redirect server error pages ; customize as needed
    #
    error_page  404              /404.html;
    error_page  500 502 503 504  /50x.html;
    location = /404.html {
        root   /usr/share/nginx/html;
    }
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # pass PHP scripts to FastCGI server
    location ~ \.php$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    # pass HTML scripts to FastCGI server (legacy code)
    # PHP-FPM config also had to be updated to allow this
    location ~ \.html$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.html;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    location ~ \.htm$ {
        root           $site_root;
        try_files $uri =404;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.htm;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }
    # === Compression ===
    gzip on;
    gzip_http_version 1.1;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript text/x-js;
    gzip_buffers 16 8k;
    gzip_disable "MSIE [1-6]\.(?!.*SV1)";
    gunzip on;
}

/etc/nginx/conf.d/redirects.conf


# Probably could wildcard this somehow. Uncomment and use as needed.
#server {
#    server_name www.example.com;
#    return 301 $scheme://example.com$request_uri;
#}
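
One hedged way to wildcard it: a named capture in the server_name regex (an untested sketch, left commented out like the rest of the file):

#server {
#    server_name ~^www\.(?<domain>.+)$;
#    return 301 $scheme://$domain$request_uri;
#}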

/web/scripts/create_site (chmod 700)


#!/bin/bash
#Create a user; uncomment the two password lines for password generation
useradd -s /sbin/nologin -g nginx -d /web/sites/"$1" "$1"
#NEWPASSWORD=`pwgen -s 16 1`
#echo $NEWPASSWORD | passwd --stdin "$1"
#Build the directory tree (mkdir -p covers the case where useradd didn't create it)
mkdir -p /web/sites/"$1"/public_html
mkdir -p /web/sites/"$1"/private
chmod 755 /web/sites/"$1" /web/sites/"$1"/public_html
chmod 700 /web/sites/"$1"/private
#Copy default files to show the site works
#(add your logic here: suggest adding robots.txt or humans.txt)
#Reset permissions; the site root itself must stay root-owned for the SSH chroot
chown -R "$1":nginx /web/sites/"$1"
chown root:root /web/sites/"$1"
#Ensure SELinux access
restorecon -Rv /web/sites/"$1"
#Show new username + password (uncomment next line)
#echo "$1,$NEWPASSWORD"
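
Usage is just the site name, which becomes both the username and the directory (the hostname here is hypothetical):

/web/scripts/create_site example.com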

/web/scripts/delete_site (chmod 700)


#!/bin/bash
#Refuse to run without a site name; otherwise the rm below would hit /web/sites itself
[ -z "$1" ] && echo "usage: delete_site sitename" && exit 1
userdel -r "$1"
rm -fr /web/sites/"$1"
echo "$1" "deleted"

Thursday, January 14, 2016

OpenVPN access on Fedora / CentOS / RHEL

SELinux and Avahi conspire to make using OpenVPN on a Red Hat-based Linux rather unpleasant. Here's how you can go about resolving that.

  • Extract any cert files from the OVPN file you received, and save them as separate files in a directory intended for said purpose.
  • The next three commands require sudo / root user...
  • semanage fcontext -a -t home_cert_t (path to certificate file) for each cert (see the sketch after this list).
  • restorecon -Rv (path of certs/*) to load the new security contexts.
  • yum remove avahi if you use a ".local" or other non-standard domain name internally. A safer option is to use systemctl disable avahi-daemon.socket avahi-daemon.service in case you need to flip it back on later.
  • Import the OVPN file to the Network Manager, and configure to use the cert files + login username + password ("password w/certificates" option).
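
To make those SELinux steps concrete, here's a sketch assuming the certs were saved under ~/.cert/vpn (the path and username are assumptions; use your own):

sudo semanage fcontext -a -t home_cert_t "/home/youruser/.cert/vpn(/.*)?"
sudo restorecon -Rv /home/youruser/.cert/vpn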