Hackviking He killed Chuck Norris, he ruled dancing so he took up a new hobby…

17Mar/2020

Unifi Security Gateway JSON config

The Unifi series from Ubiquiti has great features for centralized management of larger networks. There are, however, many things not supported in the Cloud Key UI that can still be configured. During the last deployment, we had two additional needs we couldn't accomplish from the Cloud Key itself.

  • Multiple WAN addresses - we needed to configure more than one fixed IP on the WAN interface.
  • IP-Sec SHA256 hash - one of our site-to-site VPN connections required SHA256 as the hash algorithm.

There are several guides on how to accomplish this, but they are scattered all over the place. This is a complete write-up of how to make these changes and provision them to the devices.
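As a taste of what the end result looks like: the extra settings go in a config.gateway.json file that the controller merges into the provisioned configuration. A minimal sketch for adding a second WAN address, assuming eth0 is the WAN port (the interface name and addresses are placeholders for your own values):

```json
{
  "interfaces": {
    "ethernet": {
      "eth0": {
        "address": [
          "203.0.113.10/24",
          "203.0.113.11/24"
        ]
      }
    }
  }
}
```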

Continue reading...
16Mar/2020

Making docker swarm redundant

A basic docker swarm comes with two options for storage: bind or volume. Both are persistent, but only on that node. So if the node fails and the swarm starts the task on a new node, the data is lost. There are a few options to mitigate this with storage plugins for redundant storage. For my small Raspberry Pi docker swarm, I will use replicated storage via GlusterFS and binds.
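The rough shape of the GlusterFS side, as a sketch assuming three nodes that each contribute a brick (hostnames and paths are placeholders):

```shell
# On node1: form the trusted storage pool
sudo gluster peer probe node2
sudo gluster peer probe node3

# Create and start a 3-way replicated volume from one brick per node
sudo gluster volume create gv0 replica 3 \
  node1:/gluster/brick node2:/gluster/brick node3:/gluster/brick
sudo gluster volume start gv0

# On every node: mount the replicated volume locally
sudo mkdir -p /mnt/gluster
sudo mount -t glusterfs localhost:/gv0 /mnt/gluster
```

Any node can then bind-mount a directory under /mnt/gluster into a swarm task, and the data survives the node the task happened to run on.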

Continue reading...
3Mar/2020

Rclone from CLI to GUI

Rclone is a powerful tool for syncing data to and from cloud storage. For everyday usability, a graphical interface is nice. For the use case of encrypted offsite backup, a graphical user interface to access single files for restore makes it so much more usable. As an example, this article will use the case of encrypted backup to Google Drive. In that case, we encrypt all the data, including filenames, before uploading it. That prevents us from browsing the backup on Google Drive to retrieve a specific file that we need to restore. For this purpose we could use the Rclone CLI, but it will be much easier with a nice web UI.
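Recent rclone versions ship the web UI behind the remote-control daemon. A sketch of starting it (the user and password are placeholders you should change):

```shell
rclone rcd --rc-web-gui --rc-user admin --rc-pass secret
```

The first run downloads the GUI assets and then serves them locally, letting you browse your configured remotes, including decrypted views of crypt remotes, from the browser.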

Continue reading...
2Mar/2020

Encrypted offsite backup

Having a good set-and-forget (really, set-and-double-check-every-now-and-then) strategy for your backups is important. Backups need to be automated to get done, but they also need to be tested to make sure that you can recover files when needed. This article will look at a home or small-company setup doing large-scale backups on a budget.

Continue reading...
27Feb/2020

Unifi Controller Docker backup

I'm currently running my Unifi Controller as a docker container, which works great. If you've ever ended up with a broken Unifi Controller or Cloud Key, you know the hassle of re-adopting and re-provisioning all your network gear just to get back to square one. You should really keep track of the automatic backups from Unifi.

I'm using the ryansch/unifi-rpi container, which has more than 10 million pulls on Docker Hub. There isn't any information about handling backups in the description, which surprises me! It is, however, a pretty easy thing to set up properly. Since I'm running my Unifi Controller in docker on a Raspberry Pi Docker Swarm, my biggest fear is frying the SD card. If I fry the SD card on the docker node, it will also take out the automatic backups the Unifi Controller writes to disk.
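The fix is to keep the controller's data directory, which includes the autobackup folder, on storage that survives the node, for example a bind mount onto replicated storage. A sketch only; the in-container path /var/lib/unifi is an assumption, so check the image's documentation for where it actually keeps its data:

```shell
# Bind the controller's data dir onto replicated storage
# (ports and other options omitted for brevity)
docker run -d --name unifi \
  -v /mnt/gluster/unifi:/var/lib/unifi \
  ryansch/unifi-rpi
```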

Continue reading...
26Feb/2020

MySQL on Docker Swarm

For several applications around the house, I need a MySQL backend. The biggest database I run is about 60 MB of data for my Kodi media players. By using a centralized database like this, I only need to update the library once when I add new media. It's also convenient if I watch a movie in the living room, pause it, and then want to continue in the bedroom. A few years ago we actually did this between our apartment in San Francisco and our other apartment in Sweden, so this has been battle-proven over the years for me.
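Pointing Kodi at a shared MySQL backend is a documented setup done in advancedsettings.xml on each player; a sketch, with the host and credentials as placeholders:

```xml
<advancedsettings>
  <videodatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </videodatabase>
  <musicdatabase>
    <type>mysql</type>
    <host>192.168.1.10</host>
    <port>3306</port>
    <user>kodi</user>
    <pass>kodi</pass>
  </musicdatabase>
</advancedsettings>
```

Every Kodi box with the same file shares one library, including watched status and resume points.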

Continue reading...
25Feb/2020

Raspberry Pi Docker Swarm

For small home server applications like Hassio, Plex, and headless BitTorrent boxes, the Raspberry Pi has been a great solution for years. I came to a point where I was running several of them, each with different software depending on its intended use. This, however, isn't ideal for a number of reasons. Even though Raspberry Pis are cheap, you usually end up underutilizing the hardware, so we could be running more stuff on the same boards. The second issue is the setup; I have done numerous posts about setting up different systems and how to maintain them.
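Consolidating onto a swarm only takes a couple of commands; a sketch, assuming the first Pi's address is 192.168.1.10:

```shell
# On the first Pi (it becomes a manager)
docker swarm init --advertise-addr 192.168.1.10

# init prints a join command with a token; run that on each additional Pi
docker swarm join --token <token> 192.168.1.10:2377
```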

Continue reading...
24Feb/2020

reCAPTCHA v2 vs reCAPTCHA v3

CAPTCHA was first invented in 1997 to distinguish a human from a bot performing an action. Back in the day, captchas were usually obscured or deformed letters. Before that, we had simple question verification like "what is 1 + 9?", which is trivial for a bot once it scrapes the question off the page.

reCAPTCHA originally launched in 2007 and was acquired by Google in 2009. The new thing with reCAPTCHA is that the work effort used to prove you're a human isn't wasted. Initially, reCAPTCHA was used in the digitization of The New York Times and Google Books archives. By presenting the user with images where the OCR had failed, you got a human interpretation to add to the OCR results. By combining known and unknown images, the service could still confirm you as a human while you provided this work.

Continue reading...
10Feb/2020

Dealing with credentials in PowerShell

Whenever you write PowerShell scripts that are going to be used for automation, you need to secure your credentials. The best practice is to use a service account to execute the PowerShell script and delegate whatever privileges it needs. When dealing with internal systems and resources, that is usually pretty easy if they all authenticate against the same ecosystem or are integrated properly. But there are instances where you need to store credentials, like when working with external APIs or detached internal systems.
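On Windows, the usual pattern is to let the service account encrypt the credential once and read it back at runtime; a sketch (the file path and the API call at the end are placeholders):

```powershell
# Run once, as the service account, to store the credential.
# Export-Clixml encrypts the password with DPAPI, so the file can
# only be decrypted by the same user on the same machine.
Get-Credential | Export-Clixml -Path C:\scripts\svc-cred.xml

# In the automation script, load it back and use it
$cred = Import-Clixml -Path C:\scripts\svc-cred.xml
Invoke-RestMethod -Uri $apiUrl -Credential $cred
```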

Continue reading...
3Feb/2020

Google Cloud Functions local testing – auto reload

Cloud Functions is Google Cloud’s event-driven serverless compute platform. Run your code locally or in the cloud without having to provision servers. Go from code to deploy with continuous delivery and monitoring tools. Cloud Functions scales up or down, so you pay only for compute resources you use. Easily create end-to-end complex development scenarios by connecting with existing Google Cloud or third-party services.

It's easy enough to get up and running with a local dev environment, but with the basic setup you need to restart the dev server on every file change. Doing a lot of Google App Engine development recently, I got used to the dev server reloading changed files. When you have the basic environment up and running (specified here: https://cloud.google.com/functions/docs/functions-framework), just follow these simple steps.
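The gist is to wrap the Functions Framework in a file watcher such as nodemon, which restarts it on every save; a sketch, where the function name helloHttp is a placeholder for your own target:

```shell
# Add nodemon as a dev dependency
npm install --save-dev nodemon

# Restart the Functions Framework whenever a file changes
npx nodemon --exec "npx functions-framework --target=helloHttp"
```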

Continue reading...