Categories: Development, Experiences, Ubuntu, WordPress

Migrating My Cloud Virtual Servers to SpinupWP

I’ve never been a fan of managed WordPress hosting. Don’t get me wrong; I appreciate what managed WordPress hosts do, and they are definitely doing amazing work, but it’s just not for me.

As a developer and server admin hobbyist, I prefer to own my own virtual private servers (VPS). I started using Rackspace Cloud VPS in 2008 and managed a few personal and client sites on virtual servers. When Digital Ocean launched, I moved all of my sites to Digital Ocean droplets. I’ve always enjoyed using VPS environments, as they give me complete control over my web hosting infrastructure.

The two downsides to this DIY approach are security and site management. Not being an experienced Linux sysadmin, I was always concerned that I might miss something important that would lead to my servers getting hacked. Setting up new sites on the same server meant manually creating site folders, databases, and virtual host configurations.

So when I discovered ServerPilot in 2016, it was almost a no-brainer. It gave me the flexibility of owning my virtual servers while still being able to create and manage new sites on the fly. It had some shortcomings, but I could find workarounds to get things the way I wanted.

Then, early in 2019, Delicious Brains launched SpinupWP.

I’ve been a fan and follower of the team at Delicious Brains for a few years now, and their take on the cloud-based server control panel was exciting to me. The feature set listed on the website looked better than what the ServerPilot service offered. I had to try it out.

Unfortunately, time, work and life got in the way. It took me almost two years to finally give it a try, and now that I have, I’m annoyed I didn’t do it earlier.

What makes SpinupWP better?

There are quite a few reasons I like SpinupWP more; this is just a shortlist of my initial thoughts during the process of migrating my sites over.

1. API Access

SpinupWP connects to your cloud service provider via their API. This is great because it means you can actually spin up 😀 a server from the dashboard. With ServerPilot, you have to create the server from your VPS provider first, and you have to provide root user access, which is a bit of a security risk.

2. Easy setup wizards

From setting up a new server instance to creating a new site, you are guided through the process by step-by-step setup wizards. One of the things I like about these wizards is that they’ve thought about the different types of site migration, providing options that range from setting up a brand-new site to simply provisioning an empty one to be migrated over.

3. Handy help sections

This is one of my favourite parts of the setup wizards. For almost every field you have to enter, there is a help area to the right of the view that describes what that field is for or includes more information or links to a help document. This is very handy when you’re not 100% sure of what you need to input.

4. Click to Copy

Every single piece of text (and I mean every single one) that you might need to copy to be pasted elsewhere can be copied to your clipboard via a single click. You have no idea how much time this saves when you migrate sites over to a new platform.

If I have one complaint here, it’s that clicking the domain portion of the TXT record I needed to copy for the SSL certificate generation copied the entire text, when I just needed the part before the domain name. That being said, this was the only hiccup I encountered using the product on my first try.

5. Event log

At the top right of the screen is a button to view an entire log of everything you’ve done on the platform. This is extremely useful if you need to see the status of some task running in the background.

6. Security first

Because SpinupWP creates new server instances for you, they automatically disable root access. Then, any time a new site is created, a new user is created to manage that site. These site users do not have sudo access, which means they can only manage the specific site’s files and data. You can create separate sudo users to give you a higher level of control. Finally, you can choose to access your servers using SSH public key authentication for an enhanced level of security.

7. Faster out of the box

I moved my main domain (this blog) over to a new server first. I specifically chose PHP 7.3 because that’s what the older server was running. I don’t do any specific speed tuning, and just by migrating my site from a ServerPilot-managed instance to SpinupWP, I gained an 11% speed increase on GTMetrix.

Before and after: GTMetrix performance results

I managed to move all 3 of my personal “production” domains over to a SpinupWP-managed server over the weekend. I’ve not really had a chance to dive into the dashboard fully, but I’m looking forward to seeing what else is possible. I really appreciate the product because you can see it’s the result of years of working with different WordPress hosting types. It’s designed for developers, making it as straightforward as possible to leverage the power and cost savings of cloud-based virtual private servers, with the speed and security of a managed solution.

If you’re a WordPress developer and you have sites you host yourself, or you want to self-manage your clients’ web hosting but are concerned about ease of use, stability, and security, I highly recommend giving SpinupWP a spin.

Categories: Ubuntu

Disable the Touchpad on an Ubuntu Laptop When an External Mouse Is Connected

Featured image by John Petalcurin from Pexels

One of my pet peeves when working on a laptop is the position of the touchpad relative to my right hand. Probably due to my large hands, and the way I rest my palms when typing, the part of my palm at the base of my right thumb almost always makes contact with the touchpad. So when I’m doing lots of typing, either long-form writing or coding, I inevitably brush the touchpad and the mouse cursor shoots all over the place. If I’m very unlucky, it might even copy/paste some text in the process.

For this reason, I always have an external mouse attached when using my laptop for anything other than browsing. And that means I need a way to ensure the touchpad is disabled when the external mouse is attached.

For the longest time I’ve been using the open source touchpad-indicator app, which has always just worked. Unfortunately, it seems that since Ubuntu 20.04 the app no longer works 100%. It installs and automatically loads, but the icon doesn’t appear in the taskbar, and any attempts to configure it from the command line are unsuccessful.

While I can’t blame the developer of this app for not keeping things up to date, it does mean I have to find an alternative solution. Fortunately I stumbled across one, using the dconf-editor, on the AskUbuntu forums.

Installing dconf-editor is as easy as running:

sudo apt install dconf-editor

And then running dconf-editor from the command line.

The editor exposes MANY configuration settings, which, if you don’t know what you are doing, could quite easily bork your system. It does warn you about this, so you’ve been told!

Once opened, browse to the following settings:

/org/gnome/desktop/peripherals/touchpad/send-events

Under the Custom value setting for this item, change it from enabled to disabled-on-external-mouse.
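If you’d rather skip the GUI, the same key can be changed from the command line with gsettings, which writes to the same dconf database (a quick sketch, assuming the standard GNOME touchpad schema):

# Disable the touchpad whenever an external mouse is attached
gsettings set org.gnome.desktop.peripherals.touchpad send-events disabled-on-external-mouse
# Confirm the new value
gsettings get org.gnome.desktop.peripherals.touchpad send-events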

Close the editor, restart, and you should be good to go. I tested this with a Bluetooth mouse, and it worked perfectly the first time.

Categories: Freelancing, Ubuntu

Setting Up a Matrix Server on Ubuntu 20.04 – Part 2

In part 1 of this tutorial, I dove into my reasons for setting up a Matrix Synapse homeserver, and how to set up the basics of the server and its required software. In part 2, I’m going to register an admin account, log in to the server using the online chat client to verify it’s all working as it should, and migrate the database engine from SQLite to Postgres.

A quick reminder, all server commands are run as the root user.

Register an admin user account

Installing the Matrix Synapse server software also installs a few executables on the server that can be used for specific tasks. register_new_matrix_user is one such executable; it allows you to register a new user from the command line of the server.

register_new_matrix_user -c /etc/matrix-synapse/homeserver.yaml http://localhost:8008

The process will ask you for a user name, password, and whether to make this user an admin or not.

Once you’ve created the user, you can log in to the homeserver via the Element web client hosted at https://app.element.io/#/login, or download Element for your operating system, and Sign in from the client.

Either way, to sign in to your server, click on the Change link, next to “Sign in to your Matrix account on matrix.org”.

Enter the server domain as the homeserver URL and click Next.

Enter your user name and password and click Sign in.

Switch Database Engine

While Matrix Synapse ships with SQLite by default, the official documentation suggests this only for testing purposes or for servers with light workloads. Otherwise, it’s recommended to switch to Postgres, as it provides significant performance improvements due to its superior threading and caching model and smarter query optimiser, and allows the database to be run on separate hardware.

Install Postgres

The first step is to install Postgres on the server.

apt install postgresql postgresql-contrib

We then need to create a user for Synapse to use to access the database. We do this by switching to the postgres system user and running the createuser command.

su - postgres
createuser --pwprompt synapse_user

The createuser command will ask you for a password, and create the synapse_user with that password.

Now we can create the database, by logging into the Postgres database server, while operating as the postgres system user.

psql

Once logged in, we can create the database, and assign it to the synapse_user

CREATE DATABASE synapse
 ENCODING 'UTF8'
 LC_COLLATE='C'
 LC_CTYPE='C'
 template=template0
 OWNER synapse_user;

Then we need to allow the synapse_user to connect to the database. The Postgres docs for Synapse talk about possibly needing to enable password authentication, but I found that by default this was already enabled, so all I had to do was add the relevant line to the pg_hba.conf file. I wasn’t sure how to find the pg_hba.conf file on my system, but this Stack Overflow thread explained what commands I could use when logged into the Postgres database server.

show hba_file;

My pg_hba.conf file was located at /etc/postgresql/12/main/pg_hba.conf.
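With the file located, I could quit psql and exit back to the root shell before editing it:

\q
exit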

Towards the bottom of the file, I added the Synapse local connection section to allow the synapse user access to Postgres.

# Database administrative login by Unix domain socket
local   all             postgres                                peer

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# "local" is for Unix domain socket connections only
local   all             all                                     peer
# IPv4 local connections:
host    all             all             127.0.0.1/32            md5
# Synapse local connection:
host    synapse         synapse_user    ::1/128                 md5
# IPv6 local connections:
host    all             all             ::1/128                 md5
# Allow replication connections from localhost, by a user with the
# replication privilege.
local   replication     all                                     peer
host    replication     all             127.0.0.1/32            md5
host    replication     all             ::1/128                 md5

Because line order matters in pg_hba.conf, the Synapse Postgres docs make a point of the fact that the Synapse local connection needs to be just above the IPv6 local connections.
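Once the Synapse line is in place and the file is saved, Postgres needs to re-read its configuration; restarting the service is the simplest way to do that:

systemctl restart postgresql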

Migrating the SQLite data to the Postgres database

In order for the data to be migrated from the SQLite database to the PostgreSQL database, we need to use the synapse_port_db executable, which requires that the homeserver.yaml file includes the correct server_name. So edit the homeserver.yaml, and set the server_name to your domain name from part 1.

nano /etc/matrix-synapse/homeserver.yaml
server_name: "domain.com"

The next step is to make a copy of the homeserver.yaml file, in preparation for the Postgres setup.

cp /etc/matrix-synapse/homeserver.yaml /etc/matrix-synapse/homeserver-postgres.yaml

Then, edit the new /etc/matrix-synapse/homeserver-postgres.yaml file, so that the database settings point to the newly created Postgres database.

#database:
#  name: sqlite3
#  args:
#    database: /var/lib/matrix-synapse/homeserver.db

database:
  name: psycopg2
  args:
    user: synapse_user
    password: dbpassword
    database: synapse
    host: localhost
    cp_min: 5
    cp_max: 10

Make sure that the newly created /etc/matrix-synapse/homeserver-postgres.yaml file is owned by the correct system user.

chown matrix-synapse:nogroup /etc/matrix-synapse/homeserver-postgres.yaml

The next step is to copy the SQLite database, so that we can import from the copy, and give that copy the correct permissions.

cp /var/lib/matrix-synapse/homeserver.db /var/lib/matrix-synapse/homeserver.db.snapshot
chown matrix-synapse:nogroup /var/lib/matrix-synapse/homeserver.db.snapshot

We should then stop the matrix-synapse server, before we run the import.

systemctl stop matrix-synapse

Now we can use the synapse_port_db command to import the data from SQLite to Postgres, using the SQLite snapshot, and the Postgres enabled homeserver-postgres.yaml.

synapse_port_db --curses --sqlite-database /var/lib/matrix-synapse/homeserver.db.snapshot --postgres-config /etc/matrix-synapse/homeserver-postgres.yaml

Once the import successfully completes, we can back up the current SQLite-enabled configuration.

mv /etc/matrix-synapse/homeserver.yaml /etc/matrix-synapse/homeserver-old-sqlite.yaml

Finally, we set the Postgres enabled yaml as the default.

mv /etc/matrix-synapse/homeserver-postgres.yaml /etc/matrix-synapse/homeserver.yaml

At this stage, it’s a good idea to make sure that the new /etc/matrix-synapse/homeserver.yaml file has the right permissions set, and if not, set them correctly.

chown matrix-synapse:nogroup /etc/matrix-synapse/homeserver.yaml
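Since we stopped the matrix-synapse service before running the import, the final step is to start it back up:

systemctl start matrix-synapse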

And that’s it. Log into your Matrix server via the web client or desktop client, and if you’ve completed the import correctly, everything should be working as before.

That concludes the main set up requirements to have a working Matrix homeserver.

I do however plan to have two follow-up posts. In the first, I’ll dive into some of the quirks of signing into and verifying additional clients, once your initial sign-in has been successful.

For the second, I’ll be adding some testers to the mix, and I’ll document any configuration changes I’ve needed to make to get things usable as a community communication platform. This one might take a bit longer, as it relies on other folks testing the platform, giving me feedback, and me tweaking the server until it’s a workable solution.

Categories: Freelancing, Ubuntu

Setting Up a Matrix Server on Ubuntu 20.04 – Part 1

For about the past six months or so, I’ve been interested in the open, decentralized communications standard called Matrix. In May of this year, it was announced on TechCrunch that Automattic had invested $4.6 million into the company behind the Matrix standard. The company in question, New Vector (now rebranded as Element), also develops an open-source chat client that runs on the Matrix standard and rivals Slack. This is the part that made me sit up and take notice. Matt Mullenweg is the CEO of Automattic and the co-founder and BDFL of WordPress. His decision to invest in this open-source communications standard, which offers an alternative to Slack, could very well mean that in the near future both the internal communication at Automattic and that of the WordPress project could end up running on Matrix.

For some time now, the other admins of the WP South Africa Slack community and I have been considering switching away from Slack. The main reason for this is that Slack doesn’t offer open source communities any premium plan features, so unless we’re prepared to pay $8 USD per person for our more than 1000 members, we’re stuck with the free version. It also means we don’t “own” our open source community’s communication channels. If Slack ever dropped the free plan, which could happen, we’d be stuck up the proverbial creek.

I decided it was time I took a look at what it would take to get a Matrix homeserver set up, what the pros and cons would be, and if it would be possible to migrate all our users over to our own homeserver.

Requirements

In order to set up a Matrix homeserver, I would need a web server instance to host it. Fortunately, I have a Digital Ocean account, so spinning up a new basic VPS droplet with Ubuntu 20.04, 1GB of RAM, 1 vCPU, and 25GB of storage at $5 a month took just a few clicks. The other requirement was a public domain pointing to the IP address of the server. I asked Hugh, who manages the wpsouthafrica.org domain, to set up an A record in the DNS to point the subdomain matrix.wpsouthafrica.org to the new server’s IP address. You can also use a regular top-level domain (e.g. domain.com), but if you already have a domain (for example, for your community website), using a subdomain for the Matrix server means not needing to purchase a new one.
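Once the DNS change has propagated, it’s worth confirming the record resolves to the server’s IP before continuing. On Ubuntu, dig (from the dnsutils package) does the job:

dig +short matrix.wpsouthafrica.org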

Initial setup

A note on commands: I’m running all the commands for this server as the root user. If you have access to your web server via another user with sudo privileges, I suggest switching to the root user; it will just make everything easier.

sudo su

The first step with a new server is to make sure the server software is up to date.

apt update && apt upgrade

Then, I needed to ensure that any package dependencies for the matrix-synapse server software are installed.

apt install lsb-release wget apt-transport-https

While there are official matrix-synapse packages in the Ubuntu repositories, the matrix-synapse docs recommend using their own packages. To do that I first had to enable those packages, by adding the GPG key and the matrix-synapse repository for a Debian/Ubuntu-based system.

wget -qO /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main" | tee /etc/apt/sources.list.d/matrix-org.list

Once the repository is set up, I can get the latest package updates, which will now include the Matrix ones, and install the matrix-synapse homeserver software.

apt update
apt install matrix-synapse-py3

During the install, I entered the chosen domain (in my case our subdomain) as the server name. You can also choose whether or not to send anonymous data statistics; that’s entirely up to you.

Once it’s all installed, I start the matrix-synapse server, and enable it to auto-start on system boot.

systemctl start matrix-synapse
systemctl enable matrix-synapse
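At this point, it’s worth checking that the service is up and running:

systemctl status matrix-synapse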

Configure Matrix Synapse

Firstly, I needed to generate a random string to be used as the Matrix Synapse registration secret, which I did using the following command.

cat /dev/urandom | tr -dc 'a-zA-Z0-9' | fold -w 32 | head -n 1

The server outputs the random key, which I copied and saved somewhere safe for later.

rgzzglba4KiJo5qIMcAUu3eEWsMtV8wu

No, this isn’t the key for our Matrix homeserver! 🙂

The next step is to edit the homeserver.yaml configuration file, which is in the /etc/matrix-synapse directory. For this tutorial, I’m using nano, but you can use whatever CLI text editor you prefer.

nano /etc/matrix-synapse/homeserver.yaml

I searched for the registration_shared_secret entry and changed it to the randomly generated key created earlier. It’s important to note that the key should be enclosed in double quotes.

registration_shared_secret: "rgzzglba4KiJo5qIMcAUu3eEWsMtV8wu"

I then saved and closed the homeserver.yaml file.

Then, I restarted the matrix-synapse service, which will apply the new configuration.

systemctl restart matrix-synapse

Generate SSL

My next step was setting up the web server software Nginx as a reverse proxy for the Matrix Synapse server. Before I could do that, I needed to generate an SSL certificate to make sure the traffic to and from the server is secure. This can be accomplished using a Let’s Encrypt certificate, for which I needed to install certbot.

apt install certbot

Once certbot was installed, I generated the SSL certificate, using my email address and the subdomain we have pointing to the IP address of the web server.

certbot certonly --rsa-key-size 2048 --standalone --agree-tos --no-eff-email --email user@domain.com -d domain.com

Once this completed, the SSL certificate and chain were saved at /etc/letsencrypt/live/domain.com/fullchain.pem, and the SSL key file was saved at /etc/letsencrypt/live/domain.com/privkey.pem. This information becomes useful when setting up Nginx in the next step.

Set Up Nginx as a Reverse Proxy

Setting up a reverse proxy allows Matrix clients to connect to your server securely through the default HTTPS port (443) without needing to run Synapse with root privileges. So first I install Nginx.

apt install nginx

Once installed, I create a virtual host file, to manage the incoming connections.

nano /etc/nginx/sites-available/matrix

The virtual host configuration redirects all traffic from port 80 (HTTP) to port 443 (HTTPS), configures the SSL port with the certificates created earlier, proxies any requests to the /_matrix endpoint on the domain through to the Matrix Synapse server, and configures port 8448, which is used by the Matrix Federation APIs to allow Matrix homeservers to communicate with each other.

server {
    listen 80;
    server_name domain.com;
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl;
    server_name domain.com;

    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location /_matrix {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
        # Nginx by default only allows file uploads up to 1M in size
        # Increase client_max_body_size to match max_upload_size defined in homeserver.yaml
        client_max_body_size 10M;
    }
}

# This is used for Matrix Federation
# which is using default TCP port '8448'
server {
    listen 8448 ssl;
    server_name domain.com;

    ssl_certificate /etc/letsencrypt/live/domain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain.com/privkey.pem;

    location / {
        proxy_pass http://localhost:8008;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}

Once the file is saved and closed, I can enable the Nginx virtual host and test to make sure there are no issues.

ln -s /etc/nginx/sites-available/matrix /etc/nginx/sites-enabled/
nginx -t

Finally, I restart Nginx, and enable it to start at system boot.

systemctl restart nginx
systemctl enable nginx

UFW Firewall

With the basics set up, it’s now a good idea to add some firewall rules to ensure the rest of the server’s ports are locked down. I allow ssh, http, https, and TCP port 8448 through the UFW firewall using the commands below.

for svc in ssh http https 8448
do
ufw allow $svc
done

After the rules are added, I enable the firewall and check the firewall rules, using these two commands.

ufw enable
ufw status numbered

At this point, I usually log into the server via SSH in a new terminal just to be sure I’ve not locked myself out of SSH access.

Test the Matrix Synapse homeserver

If everything is set up correctly, at this point you should be able to browse to the below URL, enter the server domain, and check if a successful call can be made to your homeserver.

https://federationtester.matrix.org/
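The tester also exposes a JSON API, so you can run the same check from the command line (an assumption based on how the tool currently works, with domain.com standing in for your server name):

curl "https://federationtester.matrix.org/api/report?server_name=domain.com"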

If you see a Success message, congratulations! You’ve successfully set up your Matrix Synapse server. However, there’s still more to be done, which we’ll dive into in part 2 of this tutorial.

Categories: Development, Experiences, General, Ubuntu

Building A New AMD Powered Workstation

If there’s one hobby that I have that I don’t get to spend much time on, it’s building/upgrading PCs.

A few years ago I upgraded my 10-year-old workstation/gaming PC, to something a bit more modern. At the time I was working with a fairly limited budget, and so I had to make some concessions around what parts to purchase for the upgrade.

During the 10 years since the original build, I had added a 128 GB SSD boot drive and two additional 1 TB HDD storage drives. So when I upgraded, I opted to merely improve the PC internals, namely the motherboard, CPU, memory, and graphics card. The plan was to use this as both a workstation PC as well as for gaming, and keep my laptop for remote work/conferences. During the upgrade, I discovered that I had a spare 128GB M.2 SSD from my then laptop that I could use as a secondary boot drive. So I ended up with a dual-boot Windows (for gaming) and Ubuntu (for work) machine.

I’d been using it this way successfully for the past two years, but over the course of the last year, a few things became clear to me.

Firstly, while the 128 GB M.2 SSD was nice and fast as a boot drive for Ubuntu, it wasn’t enough space to keep all my work-related files on, so I had to purchase an additional 1TB storage drive, move my work files there, and symlink them all back to my boot drive. This meant that indexing new projects in PHPStorm could often be painfully slow.

Secondly, Steam Play, and especially the work being done on the Proton tool, was getting REALLY good. It’s gotten to a point where most modern triple-A games run natively on Proton or require a few tweaks here and there to get running smoothly. Even Star Wars: Jedi Fallen Order (my favourite game of 2019), got to Gold level status on ProtonDB, even with the EA Origin account sign in required nonsense.

Thirdly, and a little selfishly, I wasn’t actually getting much time to game. Call me stupid, but as it turns out, the plan of having a gaming PC that would double as my workstation, while sounding like an amazing idea (gaming in my breaks during work hours, woohoo!) didn’t quite work out. In the two years since I upgraded the PC, the only game I actually managed to play through was the aforementioned Jedi Fallen Order, and that was only because I took the PC home and played in the evenings during my year-end leave.

With these realizations, I spent the latter part of 2019 and the rest of 2020 putting some money aside for a new build. The new PC would remain at the office, and the older, upgraded one would come home, giving me the ability to work and game at both locations. Over time I will probably only need to upgrade specific parts of the new machine to stay up to date, and then the parts they replace could be moved to the older one. By the time November rolled around, I’d saved enough to buy the parts for a modest mid-range build, with a decent upgrade path for future changes. Given that 2020 turned out to be the year it was, I decided I would like to end the year on a happy note.

A note on naming. I used to always call my workstation/gaming PC “Psyrig”, a portmanteau of my then online nick (Psykro) and the word “rig” (from the term gaming rig). As I got older I’ve dropped that name, and simply called it my workstation/gaming PC. Now that I have two, with different sets of parts, I’m going to have to think up some new names.

The parts

After much online research, I finally settled on the following parts

  • AMD Ryzen 5 3600 CPU
  • Asus TUF GAMING B550M-PLUS (WIFI) Motherboard
  • 16GB Corsair VENGEANCE LPX DDR4 RAM
  • Samsung 970 EVO Plus 500GB NVMe SSD
  • Gigabyte GeForce GTX 1660 Ti OC 6GB Graphics Card
  • Cooler Master MWE GOLD 650W ATX PSU
  • Cooler Master Masterbox K500L ATX case

The motherboard was the most important part of the build. I wanted something that would be solid now, but have a decent upgrade path. The Asus board supports the current-gen AMD Zen 2 CPUs as well as the newly released Zen 3 chips, has PCIe 4.0 connectivity, supports the latest standards for external ports (USB 3.2 and USB Type-C), and has built-in WiFi and Bluetooth. So when the time comes to upgrade either the CPU or the graphics card (or both), this board should be able to handle it.

I really wanted to get an X570-based board, but the price was just too high for my budget, so the B550 would have to do.

The AMD CPU was the second most important part. I’d been eyeing the Zen 2 Ryzen 5 3600 for a while, and it was a great little upgrade from my previous 2600X.

The third most important part was a decent-sized NVMe SSD that I could use as my boot drive, as well as for storing my work-related files, instead of needing to offload them to a separate hard drive. This also meant I could keep one 1TB HDD with the old PC for general storage.

When it came to the graphics card, I didn’t quite have enough in my budget to afford an RTX 2060, so I opted for a GTX 1660 Ti instead. Once the current shortage of the new graphics cards is over I’ll probably want to upgrade this to either an RTX 3060 Ti or the equivalent AMD 6000 series card.

I wanted to get DDR4 3200MHz RAM, but that was out of stock, so I settled on DDR4 3000MHz RAM instead. To round out the build, I went with a Gold-rated 650W power supply that can handle any modest planned future upgrades, and the Masterbox case because it was the most understated within my budget.

The goal of this build is to only ever need to upgrade the graphics card when the current one gets a bit out of date. The rest of the hardware should be pretty solid for at least a couple of years, and I can easily swap out anything that might cause bottlenecks down the line.

The build

I decided to stream the new build, instead of just taking before and after shots. I only ended up streaming the pre-build, where I made sure that all the parts were working, as trying to get a decent camera angle while I put the parts inside the case proved more difficult than I had anticipated.

Warning, content slightly NSFW

The completed build looks more or less how I wanted it to: simple, clean, fairly well cable managed, and with all the RGB on the motherboard turned off. I still need to turn off the front fan RGB, but that’s only still on (I think) because I didn’t connect the fan headers to the motherboard properly, so that’s a problem for another day.

Benchmarks

Hardinfo

I discovered Hardinfo when I wanted to benchmark my workstation against my laptop for my Zenbook laptop review. While the benchmarks relate to the processing capabilities of the CPU, it was nice to see that almost all of them improved across the board in the new build. The only benchmarks that were worse were the FPU (floating-point unit) benchmarks, which was interesting, but I have no idea what this means in the grand scheme of things.

Benchmark | Workstation 1 | Laptop | Workstation 2
CPU Blowfish (lower is better) | 0.97 | 1.05 | 0.50
CPU CryptoHash (higher is better) | 1284.82 | 1058.34 | 1613.40
CPU Fibonacci (lower is better) | 0.55 | 0.39 | 0.28
CPU N-Queens (lower is better) | 5.12 | 4.80 | 4.40
CPU Zlib (higher is better) | 2.35 | 1.46 | 2.82
FPU FFT (lower is better) | 0.80 | 0.65 | 0.70
FPU Raytracing (lower is better) | 2.56 | 1.13 | 4.47
GPU Drawing (higher is better) | 16462.80 | 8428.99 | 19499.27
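If you want to run the same suite yourself, Hardinfo is available in the standard Ubuntu repositories (assuming the universe repository is enabled):

sudo apt install hardinfo
# launch it and select the Benchmarks section
hardinfo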

UNIGINE

In my Windows benchmarking days, 3DMark would have been my go-to graphics benchmark tool. However, I wanted to test something on Ubuntu. After a bit of searching, I found UNIGINE, and installed and ran the Superposition benchmark, at the “1080p medium” settings configuration, on both PCs.

The old machine had a score of 7355, with an average framerate of 55, while the new machine had a score of 11153 and an average framerate of 83.

For completeness, I also ran the benchmark at “1080p high” settings on the new build, and recorded a score of 8111, with an average framerate of 60. While I can’t compare this to the old build, as the 3GB of VRAM on its graphics card can’t handle the high settings, it’s nice to know that I should be able to run most games at high settings going forward, or as a worst-case scenario, drop down to medium.

I’m very happy with this new build, and I hope I don’t have to upgrade anything major for at least a year. That being said, I am fully aware that the new AMD CPUs and GPUs, as well as the new Nvidia GPUs, have just launched, so I have no idea how long things will last in their current state. I’m a bit of a sucker for new upgrades!

Categories: Development, Experiences, Freelancing, Ubuntu

ASUS Zenbook 15 UX533FD review – an Ubuntu friendly developer laptop

I’ve always been a fan of Dell laptops. While often a little more pricey than their counterparts, their laptops are usually well built and typically run Ubuntu without any hassles, and Dell has great after-sales service. My last two laptops were Dells.

I’d been eyeing the Dell XPS 15 for about a year, and I had planned to purchase one when I was due for an upgrade. However, sometime back in 2017, someone I follow on Twitter suggested something I had not thought of: the Asus Zenbook range.

As a PC gamer, I know Asus as a well-known brand. They produce some of the best PC gaming hardware around. In recent years they’ve expanded into more integrated hardware, from tablets (my first Android tablet was an Asus) to, more recently, laptops. So I was curious to see how the Zenbook range compared to the Dell XPS range.

The first thing that struck me was the wide range of ZenBook Series laptops available. After some extensive research, I eventually settled on the Asus Zenbook 15 UX533FD.

General impressions

I’ve been using this laptop for almost a year now, and I’m very happy with it. It boots fast, runs all my applications without any problems, and is super light and easy to carry around. Whenever I’m working on it, it never gets hot, and I hardly hear the cooling fans spinning up, so it’s super quiet as well. It averages 10 hours of battery life if I’m using it for development all day, and an added bonus is that it looks really good with the Royal Blue finish. You can read more about the tech specs here, but it has everything I need in terms of connections. Finally, the charging cable is also small and light, so when it’s in my laptop backpack I hardly even notice it’s there.

Cost Effective

I always prefer to limit myself to a specific budget, this time around no more than ZAR25 000. I also tend to have specific minimum hardware requirements when it comes to a work laptop, preferring a decent Intel Core i7 CPU, at least 16GB of RAM, and a 512GB SSD. I’m not too worried about getting the latest and greatest of the Intel chips, nor do I need a 4K or touch-enabled screen. I’m also not concerned about super powerful graphics or the number of additional ports it has; I just need at least one or two USB ports and an HDMI port.

Based on these specifics, the Asus Zenbook was the clear winner, being available within my budget at ZAR7000 less than a Dell XPS configured with almost exactly the same hardware.

Ubuntu friendly

Whether it comes pre-installed with Windows Home or Pro doesn’t really matter to me, as long as I can install Ubuntu on it without any problems. At first I had some issues getting Ubuntu installed, but after a bit of research online I discovered that updating the laptop firmware to the latest version would resolve them. I also decided to install the most recent Ubuntu version, which usually contains the most recent kernel updates and therefore fewer hardware compatibility issues. The base OS install was a breeze, and I didn’t need to jump through any hacky workarounds to get certain things working. I’ve since successfully upgraded the OS to the recent LTS release (20.04), again with very few issues.

Performance

I’ll be the first to admit that I know nothing about performance benchmarks, so all I did was find a benchmarking tool on Ubuntu that would give me some scores to compare. Hardinfo seemed to be a solid option, so I ran the benchmark suite on the laptop and compared it to my AMD Ryzen powered workstation.

I was pleased to discover that not only were many of the benchmarks pretty close, but a few of them were better on the laptop, mostly in CPU related tests. I honestly can’t say I can tell the difference when I’m working on my laptop vs the workstation, except when it comes to graphics intensive applications, like games.

Benchmark | Workstation | Laptop
CPU Blowfish (lower is better) | 0.97 | 1.05
CPU CryptoHash (higher is better) | 1284.82 | 1058.34
CPU Fibonacci (lower is better) | 0.55 | 0.39
CPU N-Queens (lower is better) | 5.12 | 4.80
CPU Zlib (higher is better) | 2.35 | 1.46
FPU FFT (lower is better) | 0.80 | 0.65
FPU Raytracing (lower is better) | 2.56 | 1.13
GPU Drawing (higher is better) | 16462.80 | 8428.99

I’ve since had quite a few online interactions with other developers who’ve also discovered the joy of the Zenbook range.

So if you’re looking for a small, powerful, good looking, well priced, Ubuntu friendly laptop, you won’t go wrong with an Asus Zenbook.

Categories: Development, Ubuntu, WordPress

Submitting a patch to WordPress core, using Git

I first encountered version control in my 4th year of programming, when the lead developer at the company I worked for implemented Subversion as a code backup solution on our local testing server. As we were all required to use Windows at the time, we mostly just installed TortoiseSVN, so my command-line Subversion knowledge is limited.

A few years later, at another company, I was introduced to Git. At the time GitHub was in its infancy, but we used it internally for revision control and production deployments. This was also around the time I switched to using Ubuntu as my main OS and learned the joys of the terminal. Since then, every team I’ve worked with has used Git and either Bitbucket, GitHub, or GitLab. This means I can use Git on the command line with reasonable success, and can work with branching, code reviews, and submitting/merging pull requests using the web interfaces of these platforms, with GitHub probably being the one I’m most used to.

Back to the point: this meant that when I got into developing for WordPress at the end of 2015, and found that Subversion was used to manage both the core and plugin/theme repositories, I was a little out of my depth. Managing plugins was easier; I could keep my version control in Git, and then just memorise a couple of commands to push the latest updates to the plugin repository. Getting into core development was a little trickier, as it was all managed using Subversion. To be honest, it’s been one of the reasons I’ve struggled to get into WordPress core development.

All this changed in March of this year, when it became possible to associate GitHub accounts with WordPress.org profiles, and documentation was added to the WordPress core developer handbook on how to use GitHub pull requests for code reviews related to Trac tickets in WordPress.

I had the code and the comfort of using Git, and the documentation was very clear about the steps to follow. All I had to do was make it happen.

Setting up the development environment

I’ve blogged about this before, but I use a very simplified LAMP stack for my local development. I know, I’m old, but it’s easy to understand, fast, and it just works. It does help that my base OS is Ubuntu, and the actual LAMP setup is really easy, especially if you have the amazing tutorials over at Digital Ocean.

What did concern me was how much work I might have to do to get the wordpress-develop GitHub repository installed and set up on my local environment, to be able to test anything I added. To my surprise, it was a breeze. Using the same setup script I use for my client projects, I created a wordpress-develop local site and cloned the wordpress-develop repository into the local wordpress-develop site directory.
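For reference, the clone itself is a single command (the repository lives on GitHub under the WordPress organisation):

git clone https://github.com/WordPress/wordpress-develop.git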

The WordPress source code resides in the /src directory, so I browsed to the local .test domain which my setup script creates (https://wordpress-develop.test/src) and was surprised to see a little ‘reminder’ screen, telling me I needed to install the npm dependencies.

Fortunately I already had npm installed locally (having recently needed it for the Seriously Simple Podcasting player block) so I just needed to install and build the required assets with npm install and npm run dev.
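In other words, from the root of the cloned repository:

# install the JavaScript dependencies
npm install
# build the development version of the assets
npm run dev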

After that, it was the good old famous 5-minute install I’m used to, and within a few minutes I had the wordpress-develop code up and running on my local environment.

The actual Git process

This is all documented in the core handbook entry on Git, but once you have the GitHub repo cloned, creating a patch to submit to a ticket is pretty straightforward.

Firstly, it’s a good idea to create a working branch for your code. The handbook recommends using the ticket number as part of your branch name

git checkout -b 30000-add-new-feature

Once you have your working branch, you can make your changes to the code, and use the git add and git commit commands you’re used to if you’re comfortable using Git. You can then view your changes using the diff tool, which outputs all the changes between the main branch and your working branch.

git diff master 30000-add-new-feature

When you are ready to submit the patch, you can use the diff tool again, and simply pipe the changes to a .diff patch file.

git diff master 30000-add-new-feature > 30000.diff

If there are already patches attached to the trac ticket, you can use this tool to create updated patches.

git diff master 30000-add-new-feature > 30000.2.diff

To be honest, this is probably very similar to how it works in Subversion, but being able to use Git commands and Git branches, when you’re more comfortable with Git, makes it much easier to get started.

You can also use GitHub pull requests for code reviews on patches, but that’s a topic for another day.

This is all pretty exciting for me, because it lowers the barrier for me to contribute directly to WordPress. I’ve already got two new tickets in trac that I’ve submitted patches to, and I’m looking forward to being able to contribute back to WordPress core in the coming years.

Categories: Development, Experiences, Freelancing, Ubuntu

An experiment in dark vs light themes

It all started, effectively 2 years ago, with this tweet.

I’m not sure when I started following Brent on Twitter, but he posts interesting stuff about Laravel and PHP, and I’ve learned a bunch from his blog. Sometime last year he tweeted this, and as I dug deeper into the conversation, I realised something: I had, up until a few years ago, been a strong proponent of light themes.

But somewhere along the line, I was tempted by the dark side.


I can’t remember exactly when it happened, but it was fairly recent, around 2016 or so, that I switched my PHPStorm IDE from a light to a dark theme. I think it was when I first installed the Material Theme, and it defaulted to a dark theme. I’m not sure if I kept it because it was better, or because I was led to believe it was better, but I’d been using it ever since.

Over the past weekend, I upgraded my workstation to the latest Ubuntu release, and one of the new features of the OS is a built-in dark theme. So I tried it, and hated it.

On Sunday (exactly 2 years on from the original tweet!) I was thinking about this, and I realised that there are only two places where I use a dark theme: my terminal and my IDE. Everything else I use daily has a light theme. My text editor, my Slack instance, even my browser, all use a light theme. And I got to thinking that every time I’ve tried a new dark theme (in Ubuntu, Slack, Twitter) I’ve hated it. So why am I keeping a dark theme in the two applications I probably use the most, after Slack and my browser?

So I decided to try an experiment. I switched PHPStorm back to the IntelliJ light theme, I switched my terminal to a light theme (Tango light), and I gave it a day to see if it made a difference.

It’s now been two days like this, and I’m surprised to find that I have not noticed any negative differences. In fact, I’ve found the text in PHPStorm and my terminal easier to read, and therefore it feels like I comprehend what’s happening quicker. As I’ve always kept my monitor brightness and contrast settings low, it appears I’ve actually been working in a suboptimal way for at least 3 years!

Light themes FTW (again).

Categories: Development, Freelancing, Ubuntu, WordPress

Things I’ve been working on lately – part 1

Managesite scripts

Over the course of the past 4 years I’ve experimented with a bunch of different local development environments for my freelance client work. I started with Scotch Box, transitioned to Boss Box, and finally went back to a bare-bones LAMP stack, mostly because I develop on Ubuntu and I find Apache2 an easier web server to configure than nginx. The final addition of mkcert (for generating locally trusted SSL certificates) rounded out my local development environment requirements.

The only thing missing was an automated way to provision a new site. As I explained in the mkcert article, spinning up a new site requires a few steps I have to follow each time.

  • create new Apache VirtualHost config files
  • create a new database
  • create a client directory in my local sites directory
  • add a record to my /etc/hosts file
  • create the SSL certs
  • restart the Apache2 webserver

And then I have to do the reverse when I want to delete a site.

So I decided to put these commands together in two bash scripts, sitesetup and sitedrop.

To install these scripts to your local workstation, download the separate files, edit the HOME_USER, SSL_CERTS_DIRECTORY, and SITES_DIRECTORY variables at the top to match your local setup, make them executable, and copy them to your /usr/local/bin/ directory.

wget https://gist.githubusercontent.com/jonathanbossenger/2dc5d5a00e20d63bd84844af89b1bbb4/raw/889ec6da4e1727b63a383256172c65afb9da107e/sitesetup.sh
# edit the variables
sudo chmod +x sitesetup.sh
sudo cp sitesetup.sh /usr/local/bin/sitesetup
wget https://gist.githubusercontent.com/jonathanbossenger/4950e107b0004a8ee82aae8b123cce58/raw/8b1ceb8ca7bf17d04a15f274f1fccdd665e89dd0/sitedrop.sh
# edit the variables
sudo chmod +x sitedrop.sh
sudo cp sitedrop.sh /usr/local/bin/sitedrop

Once installed you can run either

sudo sitesetup sitename

to provision a new site or

sudo sitedrop sitename

to drop an existing site.

Next step will be to turn these into something that you can install quickly with one command, but that’s still a work in progress.

Categories: Experiences, Freelancing, Ubuntu

Additions and upgrades

It’s been just over two years since I moved into my current office space, and just over a year since I last wrote about it. As my two major hobbies outside of my work as a developer are jiu-jitsu (which not many folks can relate to) and computer hardware and peripheral upgrades (which most can at least understand), I thought it might be interesting to look back at the changes I’ve made to the tools I use every day over the last year.

The Workstation

Last year I posted a review of how my workspace had changed in the 2 years since I left employment and became a freelance developer. It included this image of what my desk looks like, with a short rundown of everything you see (and don’t see) in the picture.

My November 2018 workspace

Today, just over a year later, there have been some small but important changes.

My November 2019 workspace

So besides the less bright lighting (more on that later), the more keen-eyed among you will have noticed some subtle differences.

Firstly, my peripheral monitors have changed. I was able to pick up a newer Samsung and a Dell 24-inch monitor, so now all three screens are LED powered, and the two side monitors are more uniform. I’ve definitely noticed the improvement in visual quality with all three monitors being LED; I no longer feel like I’m having to adjust my eyes when I switch between the different screens.

The other major changes are in peripherals. I’ve replaced the Logitech headset with a Sennheiser HD 280 Pro studio set, and replaced the short mic stand with a proper desk-mounted arm. This allows me better use of my screens and keyboard when I’m in online meetings or recording podcasts (which I’ve not done as much as I’d like to in 2019).

I also treated myself to a Sparkfox Atlas Wireless Bluetooth Controller. This is a great little gaming controller which I can use connected via a USB cable or via Bluetooth; it works on both Windows and Ubuntu and has a very similar layout to an Xbox controller.

Finally, as I noted earlier, I’ve switched from the oppressive overhead fluorescent lights to a smaller lamp which sits behind the left screen. It gives off just enough light for me to see the things I need, without making my eyes water during the day.

The Laptop

While my workstation has only undergone a few external changes over the course of the last year, I eventually decided it was time for a new laptop this year.

I usually try to use a laptop for at least 4-5 years before replacing it. I’d been mostly happy with the performance of the Dell Inspiron gaming laptop I’d purchased at the end of 2017, but I ended up doing more conferences, and therefore more travel, in 2019 than I had done previously, and two things became clear. First, the laptop and its bulky, ungainly charger were heavy to carry around; second, the battery only held around a 4-hour charge.

I’ve always been partial to Dells, and the Dell XPS 15 seemed the logical choice. However, an online friend had recently purchased an Asus Zenbook, and was raving about it.

After researching the differences in price and hardware between the two, and reading a bunch of online reviews, I ended up purchasing the Asus Zenbook UX533FD, and I couldn’t be happier.

It’s super light, has a 10 hour battery life, runs the latest Ubuntu (after a bios update), and is powerful enough that I feel as productive using the laptop as I do on the desktop. At the same time, I also upgraded my laptop mouse to the Logitech M720 Triathlon, the big brother of the Marathon mouse I use on my workstation, and it’s a great wireless addition to the laptop.

The bleeding edge of operating systems

The other big change I made at the tail end of this year was upgrading both my laptop and workstation OS to the latest version of Ubuntu, 19.10, codenamed Eoan Ermine. I typically run only the latest LTS version on my workstation, and whatever the latest release is on my laptop, but the 19.10 release is so slick, stable, and fast that I had to install it on the workstation. I was actually feeling left behind whenever I switched from laptop to PC.

The future

Looking ahead, I don’t think there will be much I will change in 2020, as I’ve pretty much got my perfect set up. But who knows, upgrading computers is the one hobby I do tend to like to spend money on, so I can’t make any promises.