pyratelog

personal blog
git clone git://git.pyratebeard.net/pyratelog.git

commit 511dea9e5056cdf0cff54d812761044cd602c528
parent ef66a3040e1858f6b629c33e593eaff81fd990ef
Author: pyratebeard <root@pyratebeard.net>
Date:   Fri, 11 Feb 2022 11:12:20 +0000

Merge branch 'main' into rnd.mov.rec

Diffstat:
A entry/20220124-make_believe.md | 156++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A entry/20220127-multi_lxc_with_haproxy.md | 181++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
A entry/20220130-podracing.md | 11+++++++++++
A entry/20220202-ansible_reachable_hosts.md | 34++++++++++++++++++++++++++++++++++
A entry/20220202-the_cookie_crumbles.md | 5+++++
A entry/20220208-git_savvy.md | 14++++++++++++++
A entry/20220210-rice_of_the_machines.md | 31+++++++++++++++++++++++++++++++
M pyratelog.sh | 2+-
M style.css | 6++++++
9 files changed, 439 insertions(+), 1 deletion(-)

diff --git a/entry/20220124-make_believe.md b/entry/20220124-make_believe.md @@ -0,0 +1,156 @@ +A while ago I saw [this tweet](https://twitter.com/silascutler/status/1353385026994511872?s=19) by @silascutler who was using `make` to run `docker` commands. + +I was intrigued so I made a (very) similar Makefile and tried it with Docker +``` +override TAG = $(shell basename $$PWD) +VER := latest +IMAGE_ID = $(shell eval sudo docker images -qa ${TAG} | head -n 1) + +.PHONY: build test buildtest deploy + +build: + sudo docker build -t ${TAG}:${VER} . + +test: + sudo docker run -d ${TAG}:${VER} + +buildtest: build test + +deploy: + @echo ${IMAGE_ID} + sudo docker tag ${IMAGE_ID} pyratebeard/${TAG}:${VER} + sudo docker push pyratebeard/${TAG}:${VER} +``` + +This Makefile will use the directory name as the container tag if not specified. I did it that way so I can have one Makefile and symlink it into every project directory. The version is set to "latest" unless declared with the command +``` +make build VER=1.0 +``` + +The rest is pretty straightforward. A new container built from the current directory can be created and started if you incant +``` +make build +make test +``` + +Or you can perform both actions at the same time by incanting +``` +make buildtest +``` + +Pushing your image to your remote repo is as easy as +``` +make deploy +``` + +## earth shaping in the cloud + +After using this for a while I thought it was working well, so I decided to try it with a few other tools. 
+ +I wrote this Makefile for use with `terraform` +``` +NAME := test +VARS := terraform + +.PHONY: init plan apply planapply refresh destroy clean + +init: + terraform init + +plan: init + terraform plan -var-file=${VARS}.tfvars -out ${NAME}.tfplan + +apply: + terraform apply -auto-approve -state-out=${NAME}.tfstate ${NAME}.tfplan + +planapply: init plan apply + +refresh: + terraform refresh -state=${NAME}.tfstate + +destroy: refresh + terraform destroy -var-file=${VARS}.tfvars -state=${NAME}.tfstate + +clean: destroy + rm -f ${NAME}.tfplan ${NAME}.tfstate ${NAME}.tfstate.backup +``` + +Using this Makefile I can init, plan, and apply a terraform state with one command. I can also manage multiple plans in the same directory. + +To create a "test" plan you incant +``` +make plan +``` + +This will produce `test.tfplan`. You can then apply this plan by incanting +``` +make apply +``` + +If you then wanted to use the same variables file (terraform.tfvars) to create another plan and apply it without losing `test.tfplan` you can incant +``` +make planapply NAME=newtest +``` + +Coming back later you can destroy everything from `newtest.tfplan` if you incant +``` +make destroy NAME=newtest +``` + +This will leave the `newtest.tfstate` file if you wanted to re-apply, or use `make clean` to delete everything. + +## as you drist + +Then I got more adventurous and decided to write a Makefile for use with my `drist` modules (if you're not sure what `drist` is you can read [my post](https://log.pyratebeard.net/entry/20210305-the_usefulness_of_drist.html) about it) +``` +SERVER := inventory +FILESDIR = files + +.PHONY: patch pkgs create_user ssh_keys new_server sshd_config fail2ban dots secure commission + +env-%: + cd $* ; if [ ! 
-d ${FILESDIR} ] ; then mkdir ${FILESDIR} ; fi + cp env $*/${FILESDIR} + +patch: + cd patch ; drist ${SERVER} + +pkgs: + cd packages ; drist ${SERVER} + +create_user: env-create_user + cd create_user ; drist ${SERVER} + +ssh_keys: + cd ssh_keys ; drist ${SERVER} + +new_server: env-ssh_keys + cd create_user ; drist ${SERVER} + cd ssh_keys ; drist ${SERVER} + +sshd_config: env-sshd_config + cd sshd_config ; drist ${SERVER} + +fail2ban: + cd fail2ban ; drist ${SERVER} + +dots: + cd deploy_dots ; drist ${SERVER} + +secure: sshd_config fail2ban + +commission: new_server patch pkgs secure dots +``` + +That may seem like a lot, but it should be fairly easy to figure out. I normally run this when I have built a new server and want to "commission" it with my settings +``` +make commission SERVER=newhost +``` + +This will create a user and upload my ssh public keys, update the system (patch) and install a set of packages which I always want to have. It will then set a preconfigured sshd config file, install and configure fail2ban, and deploy my user configurations (dotfiles). + +## meet the maker + +Using `make` like this will probably make a lot of people shudder. I don't use it for everything but after trying the above I found writing a simple Makefile was slightly quicker than writing a wrapper script, and it's another way for me to confuse coworkers who like buttons over text. + +If you like this idea I would be interested to see what other tools people use Makefiles for. If you think the above can be improved let me know or raise a [Gitlab merge request](https://gitlab.com/pyratebeard/makefiles). diff --git a/entry/20220127-multi_lxc_with_haproxy.md b/entry/20220127-multi_lxc_with_haproxy.md @@ -0,0 +1,181 @@ +Near the beginning of last year I hit a few issues with some of my Docker containers and part of my CI/CD pipeline. Around the same time I seemed to be reading more about LXC, and a few people on IRC mentioned that it was worth learning. 
I decided to take a step back from Docker and give LXC a go. + +## what the chroot +LXC, or Linux Containers, is a virtualisation method that allows the kernel to be shared between multiple environments, or containers. While traditionally with Docker you would run single applications inside a container then network them together (web server, database, etc.), LXC gives you a "full" Linux system which, unlike a virtual machine, shares the same kernel as the host. + +There are pros and cons to LXC but I don't want to get into that in this post. If you would like to know more about LXC check out the [official website](https://linuxcontainers.org). I should also point out that I have stuck with LXC and not LXD, which is a next-generation container manager. + +Setting up LXC is straightforward by following the [official guide](https://linuxcontainers.org/lxc/getting-started/). + +Creating a container is as easy as +``` +lxc-create -t download -n <name> +``` + +then selecting an image from the list shown. + +Or if you know the image you want to use you can specify it +``` +lxc-create -n <name> -t download -- --dist <distro> --release <release_number> --arch <architecture> +``` + +After I created my container I started it and set it up as I would any other system. This then became my "base image". Any new container I wanted could be cloned from this so it is already set up. I renew the base image periodically with updates etc. + +Cloning a container can be done by incanting +``` +lxc-copy -n ${BASE} -N ${NEW} +``` + +This command is _supposed_ to change the hostname of the cloned container but I found it didn't. To remedy that incant +``` +sudo sed -i "s/${BASE}/${NEW}/" ${HOME}/.local/share/lxc/${NEW}/rootfs/etc/hostname +``` + +## virtualise all the things +I was using Docker to run a number of things on a single VPS, using an Nginx container as a proxy. + +For no particular reason, with LXC I opted for HAProxy. My HAProxy is running in a container. 
On the host server I set the following firewall rules to send traffic to the HAProxy container +``` +iptables -t nat -I PREROUTING \ + -i ${INTERFACE} \ + -p TCP \ + -d ${PUBLIC_IP_ADDRESS}/${CIDR} \ + --dport 80 \ + -j DNAT \ + --to-destination ${HAPROXY_CONTAINER_IP}:80 + +iptables -t nat -I PREROUTING \ + -i ${INTERFACE} \ + -p TCP \ + -d ${PUBLIC_IP_ADDRESS}/${CIDR} \ + --dport 443 \ + -j DNAT \ + --to-destination ${HAPROXY_CONTAINER_IP}:443 +``` + +Then I could log in to the HAProxy container to configure it. The config file may be either /etc/haproxy.cfg or /etc/haproxy/haproxy.cfg; on my container it is the latter. + +Of course I want to use SSL, and it is advised to set the Diffie-Hellman parameter to 2048 bits instead of the default 1024. I included the following in the `global` section of haproxy.cfg +``` +tune.ssl.default-dh-param 2048 +``` + +I am using LetsEncrypt for my SSL certificates, so I installed `certbot`. This will be used later to generate our SSL certificates. One of the best solutions I found for LetsEncrypt with HAProxy is from [janeczku](https://github.com/janeczku/haproxy-acme-validation-plugin) on Github. I put a copy of the `acme-http01-webroot.lua` script into /etc/haproxy/ and added the following to the `global` section of haproxy.cfg + +``` +lua-load /etc/haproxy/acme-http01-webroot.lua +``` + +To tell HAProxy to use SSL I had to configure a couple of `frontends` after the `default` section +``` +frontend http_frontend + bind *:80 + + acl url_acme_http01 path_beg /.well-known/acme-challenge/ + http-request use-service lua.acme-http01 if METH_GET url_acme_http01 + + redirect scheme https + +frontend https_frontend + bind *:443 +``` + +This config will redirect HTTP traffic on port 80 to HTTPS on 443. + +Now I can declare a `backend` and `acl` to route traffic. For the sake of example my LXC container is called "pyratelog" and the domain I am pointing to is "log.pyratebeard.net". 
+ +The `acl` is declared in the `https_frontend` section +``` +frontend https_frontend + bind *:443 + + acl pyratelog hdr(host) -i log.pyratebeard.net + use_backend pyratelog if pyratelog +``` + +Then beneath the `frontend` the `backend` section is configured +``` +backend pyratelog + balance leastconn + http-request set-header X-Client-IP %[src] + server pyratelog pyratelog:80 check +``` + +LXC has built-in container name resolution, so you can use the name of the container instead of its IP address. + +A reload of HAProxy picks up the changes. + +I used `certbot` to request a new SSL cert +``` +certbot certonly --text \ + --webroot --webroot-path /var/lib/haproxy \ + -d log.pyratebeard.net \ + --renew-by-default \ + --agree-tos \ + --email me@email.com +``` + +This created two PEM files, a private key and a chain file. I combined these into one file to be read by HAProxy +``` +cat /etc/letsencrypt/live/log.pyratebeard.net/privkey.pem \ + /etc/letsencrypt/live/log.pyratebeard.net/fullchain.pem \ + | tee /etc/letsencrypt/live/pem/pyratelog.pem +``` + +Now I had to alter the `https_frontend` section to point to the SSL cert directory +``` +frontend https_frontend + bind *:443 ssl crt /etc/letsencrypt/live/pem/ +``` + +and reloaded HAProxy. 
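+ +Since the key and chain have to be combined again after every issue or renewal, that step could be wrapped in a small helper (a sketch; the `combine_pem` name and the optional directory argument are my own invention, the default paths mirror the ones above) +``` +# sketch: combine a LetsEncrypt private key and chain into a +# single PEM for HAProxy ("combine_pem" is a hypothetical helper) +combine_pem() { + domain="$1" # e.g. log.pyratebeard.net + name="$2" # e.g. pyratelog + live="${3:-/etc/letsencrypt/live}" # letsencrypt live directory + cat "${live}/${domain}/privkey.pem" \ + "${live}/${domain}/fullchain.pem" \ + > "${live}/pem/${name}.pem" +} +``` + +followed by a reload of HAProxy as before. 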
+ +When I added another LXC container behind HAProxy I simply added a new `backend` and included an `acl` in the `https_frontend`, so it would look something like this +``` +frontend https_frontend + bind *:443 + + acl pyratelog hdr(host) -i log.pyratebeard.net + use_backend pyratelog if pyratelog + + acl pyrateweb hdr(host) -i pyratebeard.net + use_backend pyrateweb if pyrateweb + +backend pyratelog + balance leastconn + http-request set-header X-Client-IP %[src] + server pyratelog pyratelog:80 check + +backend pyrateweb + balance leastconn + http-request set-header X-Client-IP %[src] + server pyrateweb pyrateweb:80 check +``` + +Then I ran the `certbot` command again, and combined the PEM files +``` +certbot certonly --text \ + --webroot --webroot-path /var/lib/haproxy \ + -d pyratebeard.net \ + --renew-by-default \ + --agree-tos \ + --email me@email.com + +cat /etc/letsencrypt/live/pyratebeard.net/privkey.pem \ + /etc/letsencrypt/live/pyratebeard.net/fullchain.pem \ + | tee /etc/letsencrypt/live/pem/pyrateweb.pem + +``` + +A reload of HAProxy picks up the changes. + +From now on renewing an SSL cert is done by incanting +``` +sudo certbot certonly --text --webroot --webroot-path /var/lib/haproxy -d log.pyratebeard.net +``` + +then combining the PEM files again, overwriting the previous file, and reloading HAProxy. + +I was happy with how easy it was to get LXC running with HAProxy, and now comfortably run a number of containers on a single host. + +Docker hasn't completely been removed from my systems, but depending on the use case I do lean towards LXC a bit more these days. I have been running my LXC setup for over a year and have had no issues. The "CI/CD" has had to change though, and I will cover how I publish these blog posts onto my LXC container in a later post. diff --git a/entry/20220130-podracing.md b/entry/20220130-podracing.md @@ -0,0 +1,11 @@ +Over the last year I have started listening to podcasts on a more regular basis. 
Most of the time I have about 20 to 30 minutes while I am doing a chore or some other task. For a long time I was listening to [Cory Doctorow's reading](https://craphound.com/hackercrackdown.xml) of [The Hacker Crackdown](https://www.gutenberg.org/ebooks/101) by Bruce Sterling, but felt like it was taking me too long to get through the episodes 20 minutes at a time. + +I decided to start speeding up the podcast so I could get through more. First I sped up to 1.10x to see what it was like but very quickly moved to 1.25x. I got so accustomed to listening at 1.25x that when I put an episode on a different device I thought Cory was talking really slowly. + +As I grew used to the speed I started listening to all my podcasts at 1.25x. After a while I increased to 1.30x, then 1.40x. Now I can listen to almost all at 1.50x speed. This has really changed my podcast listening and I don't feel like I am falling behind so much. + +On occasion the speed has to be dropped down again if somebody is talking quite fast at normal speed but I am surprised at how quickly I have become used to it. + +I have started trying this for videos as well. Some vlogs I subscribe to can be sped up to about 1.35x at the moment. I think video at speed is harder to follow than audio. I have even managed to watch a few movies at 1.25x. + +Now I need to work on improving my speed reading... diff --git a/entry/20220202-ansible_reachable_hosts.md b/entry/20220202-ansible_reachable_hosts.md @@ -0,0 +1,34 @@ +I use AWX for Ansible at work quite a bit. We have a number of workflows that will run on multiple hosts. One issue we had was that some systems may be offline when the templates in the workflow run, and this would result in a template (and ultimately a workflow) failure even though all the other systems were successful. + +Stack Overflow to the rescue! Thanks to Alex Cohen for [this solution](https://stackoverflow.com/a/55190002). 
+ +To combat the offline hosts the playbook can be modified to perform a check on the inventory first, loading any online systems into a "reachable" group. The rest of the playbook would only be run against the online systems. + +``` +--- +- hosts: all + gather_facts: false + tasks: + - block: + - wait_for_connection: + timeout: 5 + - group_by: + key: "reachable" + tags: always + check_mode: false + rescue: + - debug: + msg: "unable to connect to {{ inventory_hostname }}" + +- hosts: reachable + tasks: + - name: normal playbook tasks from here + ... +``` + +As you can see this is achieved using the `block` feature in Ansible. The `key` parameter in the `group_by` module specifies the name of our ad-hoc inventory group, in this case "reachable". + +Using the `debug` module message in `rescue` allows us to mark the offline systems as rescued so they don't fail the playbook. Then the rest of the playbook is run against all systems in the "reachable" group. + +My workflows no longer fail, and I don't have to explain why I'm not concerned when there is red on the dashboard (¬_¬). + diff --git a/entry/20220202-the_cookie_crumbles.md b/entry/20220202-the_cookie_crumbles.md @@ -0,0 +1,5 @@ +I am [not a fan](https://log.pyratebeard.net/entry/20220108-type_cookie_you_idiot.html) of the abusive use of cookies and the despicable consent pop-ups. So I am delighted to read that the Belgian Data Protection Authority has ruled that online advertising trade body IAB Europe's consent pop-ups are unlawful, and they must delete all user data for violating GDPR. + +You can read more about this on the [Irish Council for Civil Liberties](https://www.iccl.ie/news/gdpr-enforcer-rules-that-iab-europes-consent-popups-are-unlawful/) website, and here is the [PDF of the full decision](https://www.gegevensbeschermingsautoriteit.be/publications/beslissing-ten-gronde-nr.-21-2022-english.pdf). 
+ +This decision will impact hundreds of companies that use this data, and looks to be another step forward in the battle against the obnoxious plague of cookies and pop-ups. diff --git a/entry/20220208-git_savvy.md b/entry/20220208-git_savvy.md @@ -0,0 +1,14 @@ +Last year I volunteered to give a talk at one of the [Dublin Linux Community](https://dublinlinux.org)'s online meetups. I decided to give a talk on [git](https://git-scm.com/), the version control system, and inspired by xero's [grok git](https://git.io/grokgit) I made the presentation a shell script. + +To get a copy of the presentation script in your terminal incant +``` +curl -L -o gitsavvy rum.sh/gitsavvy +sh ./gitsavvy +``` + +or, if you trust me, pipe directly into a shell +``` +curl -sL rum.sh/gitsavvy | sh +``` + +The recording of the presentation is on my [peertube instance](https://tube.pyratebeard.net/videos/watch/270b2ffe-4918-46c5-915c-f76dbe998593). diff --git a/entry/20220210-rice_of_the_machines.md b/entry/20220210-rice_of_the_machines.md @@ -0,0 +1,31 @@ +The first Desktop Environment (DE) I ever used on a Linux distro was KDE (circa 2008). Coming from Windows I was amazed at how much you could customise the theme and style of the windows; even back then I preferred dark mode everything. I started to play around with themes and colours, and this customising continued through my use of Gnome 2 and then XFCE. By this point I had started to notice screenshots of Linux desktops online, leading me eventually to [/r/unixporn](https://reddit.com/r/unixporn) and the term "ricing". + +According to the [/r/unixporn wiki](https://www.reddit.com/r/unixporn/wiki/themeing/dictionary/#wiki_rice) + +> "Rice" is a word that is commonly used to refer to making visual improvements and customizations on one's desktop. 
It was inherited from the practice of customizing cheap Asian import cars to make them appear to be faster than they actually were - which was also known as "ricing" + +I was jubilant to find a whole community of people who enjoyed customising their systems. Through /r/unixporn I was introduced to new techniques, tools, software, and other communities. + +My ricing became more deliberate. I started to see ricing as an art form, taking inspiration from others, pictures, or even single colours. I moved away from DEs and began using a Window Manager (WM). I also paid more attention to the software I used and tried to customise everything I could. I also began to [share my ricing screenshots](https://www.reddit.com/r/unixporn/search?q=author%3Apyratebeard). + +A few /r/unixporn ricers (xero, venam, z3bra) led me to the [nixers](https://nixers.net) community. While not strictly a "ricing" community there are a number of members who rice. Recently it was decided to showcase some of [our screenshots](https://screenshots.nixers.net). + +I see a lot of posts on /r/unixporn and [/r/unixart](https://reddit.com/r/unixart) with ricers saying "this is it, my finished rice", and I always chuckled at this. I was always playing around with my colours and styles, forever tweaking. There is no end... I thought. + +I haven't made any changes to my current theme in a long time. It was a lot of tweaking to get it how I like it, but the style, colours, and usability have all worked so well that I haven't needed to adjust it. I like this rice so much I entered it into the August 2021 /r/unixporn ricing competition and took [second place](https://www.reddit.com/r/unixporn/comments/pnkbev/august_ricing_competition_winners/) (thanks to all who voted!). + +This isn't to say I have finished ricing, I have a few ideas for new projects, but have been so happy with what I have now that maybe "this is it, my finished rice". Somebody will probably chuckle at the naivety of that statement. 
+ +If you want to get into ricing check out /r/unixporn and /r/unixart to get an idea of what is possible. [Dotshare](http://dotshare.it) is a great source for themes and their accompanying configuration files. I found [gpick](https://www.gpick.org) a great tool for finding colours and whole palettes from images. The website [terminal.sexy](https://terminal.sexy) is also useful to play around with the colours and see how they look together. + +Take your inspiration from anything. I have riced based on comic books, album covers, and even a single colour. + +Some of my favourite rice screenshots by others are: + +* [bbs lyfe](https://i.redd.it/vi7crebm52sx.png) by xero +* [hyper light drifter](https://i.redd.it/yqgwrh1ojebz.png) by novcn +* [creation](https://i.redd.it/t8p0l43zjvm01.png) by szorfein +* [muspelheim](https://i.redd.it/n5m7rap66ii71.png) by barbarossa +* [gvfr](https://u.teknik.io/qc8dI.png) by nxll + +A collection of my rice screenshots can be [found here](https://pyratebeard.net/scrots.html). diff --git a/pyratelog.sh b/pyratelog.sh @@ -21,7 +21,7 @@ find_md=$(find entry/ -type f -name "*.md" | sort) for md in ${find_md} ; do # get the title and date from the filename - input=$(echo ${md} | cut -f2 -d '/' | cut -f1 -d '.') + input=$(echo ${md} | cut -f2 -d '/' | rev | cut -f2- -d '.' | rev) # cut the date and turn into epoch time input_date=$(echo ${input} | cut -f1 -d '-' ) diff --git a/style.css b/style.css @@ -79,6 +79,12 @@ span b a:hover { border-bottom: 0px; } +span.citation { + color: #bbbbbb; + letter-spacing: 1.1px; + font-size: 1em; +} + @media (max-width: 767px) { header { padding-left: 20px;
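As an aside on the pyratelog.sh change in the diff above: the old `cut -f1 -d '.'` truncated the filename at the first dot, while the new `rev | cut -f2- -d '.' | rev` pipeline strips only the final extension, so dots inside a name survive. A quick sketch of the difference (the `.v2` filename is a made-up example)
```
md="entry/20220124-make_believe.v2.md"
# old pipeline: cuts at the first dot, losing ".v2"
echo ${md} | cut -f2 -d '/' | cut -f1 -d '.'
# -> 20220124-make_believe
# new pipeline: reverses, drops the first (i.e. last) field, reverses back
echo ${md} | cut -f2 -d '/' | rev | cut -f2- -d '.' | rev
# -> 20220124-make_believe.v2
```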