
13 Jun 2016

WinSCP: SFTP – FTP over SSH

It's been a while since I did a tool post, and I realize that many people don't know about WinSCP or even SFTP, file transfer over SSH. I use it all the time to quickly transfer files to Linux-based boxes like a Raspberry Pi or my Amazon Web Services VPS machines. As long as you have SSH access you can use WinSCP to transfer files. You can set it up to use sudo and make every part of the file system writable, but I wouldn't recommend it; it's easy to make a mistake that destroys your system, especially when you're working with remote systems. By default WinSCP, like other SFTP clients, ends up in the logged-in user's home directory. If you then need the files anywhere else on the system you can use an SSH client, like PuTTY, to move the files to the correct location later.
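
If you do need a file somewhere outside your home directory, a quick SSH session afterwards is usually all it takes. A minimal sketch, assuming a hypothetical config file uploaded to the home directory that should end up under /etc:

# Hypothetical example: myapp.conf was uploaded to the home directory with WinSCP
sudo mv ~/myapp.conf /etc/myapp.conf
sudo chown root:root /etc/myapp.conf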


5 May 2016

WD NAS: Enable FTPS

Sending unencrypted FTP across the internet is a bad idea! You send your credentials in plain text, compromising access security as well as the data you're sending. My Book Live Duo, like most NAS products, supports unencrypted FTP. Since it's based on vsftpd it's only a matter of configuration to turn it into a much more secure FTPS implementation instead. In this post I'm using my Western Digital My Book Live Duo, but this is applicable to most Western Digital NAS products and many other brands as well.

Enable SSH

First of all we need to enable SSH to get access to more configuration options for the FTP service. By accessing http://{WD IP-address}/UI/ssh you will see a screen where you can enable SSH access and get the root password.


After this we can connect to the Live Duo via SSH. I recommend changing the root password as the first thing you do; use the passwd command to accomplish this.

Create certificate

The My Book Live Duo, and probably most of the other models as well (since they share much of the firmware), already has OpenSSL installed, which we can use to create the certificate. First we need to create a folder for the certificates and then generate them. I generate both 2048-bit and 4096-bit certificates since I want to test the performance difference (see below). You should not use a 1024-bit key length since that has been proven to be weak and can be broken.

mkdir /etc/ssl/ftp
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/ftp/vsftpd_2048.key -out /etc/ssl/ftp/vsftpd_2048.pem
openssl req -x509 -nodes -days 365 -newkey rsa:4096 -keyout /etc/ssl/ftp/vsftpd_4096.key -out /etc/ssl/ftp/vsftpd_4096.pem

You will be asked a bunch of questions about location and other details. You can more or less put in whatever you like; since this is a self-signed certificate it will never automatically be trusted by clients anyway, so the information is pretty much irrelevant.
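
If you want to skip the interactive questions altogether, openssl can take the subject on the command line instead. A minimal sketch with placeholder values (put in whatever you like here as well):

openssl req -x509 -nodes -days 365 -newkey rsa:2048 -subj "/C=SE/O=Home/CN=mybooklive" -keyout /etc/ssl/ftp/vsftpd_2048.key -out /etc/ssl/ftp/vsftpd_2048.pem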

Configure FTP (vsftpd)

The My Book Live Duo already has an FTP service that you can enable from the UI. It uses vsftpd, which supports SSL and TLS, which is what we want to use here, as long as OpenSSL is available on the box; apparently it is, since we just generated the certificates. First we make a copy of the original conf file for safekeeping and then open it for editing.

cp /etc/vsftpd.conf /etc/vsftpd.conf.bak
nano /etc/vsftpd.conf

At the end of the file we add:

#SSL CONF
rsa_cert_file=/etc/ssl/ftp/vsftpd_2048.pem
rsa_private_key_file=/etc/ssl/ftp/vsftpd_2048.key

ssl_enable=YES
allow_anon_ssl=NO
force_local_data_ssl=YES
force_local_logins_ssl=YES

ssl_tlsv1=YES
ssl_sslv2=NO
ssl_sslv3=NO

require_ssl_reuse=NO
ssl_ciphers=HIGH

Press CTRL + O to save and CTRL + X to exit nano. Then we restart the FTP service.

/etc/init.d/vsftpd restart


Now you can try it from FileZilla, or whatever client software you like that supports FTPS. In FileZilla you will get this certificate warning, where you can see the additional information you put in when you created the certificate.
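
If you prefer to check the TLS handshake from the command line instead of a GUI client, openssl can speak explicit FTPS as well. A minimal sketch, assuming the NAS answers on 192.168.0.10 (substitute your own IP); it prints the certificate details you entered earlier:

openssl s_client -connect 192.168.0.10:21 -starttls ftp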

Performance - 2048 vs 4096

The first run with the configuration above gave me around 8.9 MiB/s transfer speeds, and the CPU of the Live Book Duo was at around 89%. I then changed the certificates to the 4096-bit ones, restarted the service and tried again. I got more or less the same numbers with the stronger key, so the CPU is not the bottleneck for the throughput. At the same time, I'm not running any other services besides the SMB shares on this device.
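
Switching to the 4096-bit certificate for the second test only requires pointing the two certificate lines in /etc/vsftpd.conf at the other files and restarting the service. A minimal sketch using sed, assuming the paths from the configuration above:

sed -i 's:vsftpd_2048:vsftpd_4096:g' /etc/vsftpd.conf
/etc/init.d/vsftpd restart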

Make a backup of the config file

cp /etc/vsftpd.conf* /shares/Backup/

The backup is good to have if a firmware update changes the config file back. I have tried enabling and disabling the FTP service, and at least that doesn't affect the configuration.
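
If a firmware update does revert the configuration, restoring it is just a matter of copying the backup back and restarting the service. A minimal sketch, assuming the backup share path used above:

cp /shares/Backup/vsftpd.conf /etc/vsftpd.conf
/etc/init.d/vsftpd restart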

7 Mar 2016

Odroid: First setup

If you have installed a Raspberry Pi in the past, the Odroid will not be a problem. It's a little more hands-on and requires a little more effort than the Raspberry Pi. Here is a quick guide on how to get your Odroid up and running. In this example I use an old Odroid-C1 that I found in one of my drawers.

Download image

First you need to download an image for your SD card (or eMMC module, which is supported by Odroid). The Hardkernel download page has all the available images. Then use SD Formatter and Win32DiskImager to get the image onto the SD card. When you first boot your Odroid you will notice that it doesn't give HDMI output, as opposed to the Raspberry Pi. You will need to connect it to your network and make sure you have a router or similar running a DHCP server. Once you see the IP in the DHCP list you can go ahead and use PuTTY to SSH into it. The default username is root and the default password is odroid.

Initial config

The first thing you want to do is change the default root password by running passwd. Now you have a much more secure box than before. The Raspberry Pi ships with a config script that you can use to make your basic config; the Odroid doesn't. I did however find a great script called odroid-utility written by Mauro Ribeiro, who actually seems to be an employee of Hardkernel. The script is open sourced on GitHub and gives you more or less the same capabilities as the Raspberry Pi config script. Since it's not shipped we need to download it. Make sure that your Odroid has an internet connection and run the following:

sudo -s
wget -O /usr/local/bin/odroid-utility.sh https://raw.githubusercontent.com/mdrjr/odroid-utility/master/odroid-utility.sh
chmod +x /usr/local/bin/odroid-utility.sh
odroid-utility.sh


It gives you the ability to do basic configuration like HDMI settings, resizing the root partition and changing the hostname. You can also update the kernel with this tool. Each time you run the script it updates itself, so once you download it you can be sure you have the latest release.

Update your system

After all this just do a basic package upgrade and your Odroid is ready for use.

sudo apt-get update
sudo apt-get upgrade

Timezone

This one is really important for some implementations, since an incorrect date and time can break some setups.

sudo dpkg-reconfigure tzdata
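
On newer images that use systemd you can also set the timezone non-interactively, which is handy in setup scripts. A minimal sketch, with Europe/Stockholm as an example timezone:

sudo timedatectl set-timezone Europe/Stockholm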

15 Feb 2016

AWS EC2 Linux: Simple backup script

I have a small EC2 box running a couple of WordPress sites. To back them up I wrote a quick bash script that dumps out the databases and also zips up all the files. This script runs daily; the amount of disk space doesn't really matter since the sites are really small. If you have larger sites you might want to split the script into one for weekly backup of files and a daily one for databases, but the principle is the same.

Prerequisites

  • Credentials for MySQL with access to the database in question. In this example I run with root.
  • A folder where you want to store your backups. You should use a separate folder since we clean out old backups in the script.
  • Path to the www root folder for the sites.

Script

First we create a script and open it for editing in nano:

nano backup.sh

The first line of any bash script is the shebang:

#! /bin/bash

Then we create a variable to keep the current date:

_now=$(date +"%m_%d_%Y")
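
For example, running the script on 15 Feb 2016 would set _now to 02_15_2016, so every backup file created below gets the date in its name.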

Then we dump out the databases to .sql files:

mysqldump --user root --password=lamedemopassword wp_site_1 > /var/www/html/backup/wp_site_1_$_now.sql
mysqldump --user root --password=lamedemopassword wp_site_2 > /var/www/html/backup/wp_site_2_$_now.sql

Here we use the $_now variable we declared at the beginning of the script, so we can easily find the correct backup if we need to do a restore. The next step is to zip up the www contents of each site:

tar -zcvf /var/www/html/backup/www_backup_site1_$_now.tar.gz /var/www/html/wp_site_1
tar -zcvf /var/www/html/backup/www_backup_site2_$_now.tar.gz /var/www/html/wp_site_2

Once again we use the $_now variable to mark the file name with the current date for the backup.

Then we want to clean out backups older than x days. In this example I remove all backup files older than 7 days.

find /var/www/html/backup/* -mtime +7 -exec rm {} \;

The first part, find /var/www/html/backup/* -mtime +7, lists all files in the backup folder that were modified more than 7 days ago. Then we use -exec to pass each result to a command, in this case rm. The {} is replaced with the file found, and the \; terminates the command; the backslash escapes the semicolon so the shell doesn't expand it. So for each file found, find executes the rm command and removes that file.
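
Putting the pieces together, the complete backup.sh looks like this (with the demo credentials and paths from the examples above):

#! /bin/bash
# Date stamp used in all backup file names
_now=$(date +"%m_%d_%Y")
# Dump the WordPress databases
mysqldump --user root --password=lamedemopassword wp_site_1 > /var/www/html/backup/wp_site_1_$_now.sql
mysqldump --user root --password=lamedemopassword wp_site_2 > /var/www/html/backup/wp_site_2_$_now.sql
# Zip up the www contents of each site
tar -zcvf /var/www/html/backup/www_backup_site1_$_now.tar.gz /var/www/html/wp_site_1
tar -zcvf /var/www/html/backup/www_backup_site2_$_now.tar.gz /var/www/html/wp_site_2
# Clean out backups older than 7 days
find /var/www/html/backup/* -mtime +7 -exec rm {} \;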

Save the backup.sh file and exit nano. Now we need to make the script executable:

chmod 755 backup.sh

Then we can do a test run of the script:

sudo ./backup.sh

Now check the folder to verify that the files were created and contain the actual data. If you're satisfied with the result you can move the script into the cron.daily folder.

sudo mv /home/ec2-user/backup.sh /etc/cron.daily/

Now the daily cronjob will create a backup of the WordPress sites, both files and databases.

11 Feb 2016

AWS EC2 Linux: Enable SSH password logon

Amazon AWS EC2 instances are by default secured with SSH key pairs. This is great for security, until you need to provide a UAT duplicate for an external user or developer. You don't want to share your private key with them, and setting up a new key pair is more work than this quick fix. Security isn't as important on a UAT or test system as it is on a production system, so for this use case we can accept lower security.

To enable password access we first need to set a password on the ec2-user account. It's highly recommended that you select a strong password!

sudo passwd ec2-user

Then we need to allow password-based connections. Edit the SSH daemon settings, find the line PasswordAuthentication no and change it to PasswordAuthentication yes.

sudo nano /etc/ssh/sshd_config
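
If you prefer a one-liner over editing the file in nano, the same change can be made with sed. A minimal sketch, assuming the line reads exactly PasswordAuthentication no:

sudo sed -i 's/^PasswordAuthentication no/PasswordAuthentication yes/' /etc/ssh/sshd_config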

Then we need to restart the ssh service.

sudo service sshd restart

Now you can log in to your Amazon AWS EC2 instance with only a password. To secure the server again, just change the PasswordAuthentication line back to no.

15 Sep 2015

Raspbian: fstab doesn’t mount NFS on boot

I ran out of disk space in one of my Raspberry Pi projects last night. Of course I had done a quick and dirty install with NOOBS, so cloning to a larger SD card felt like a drag. So I decided it was time to upgrade from a 4GB SD to a 16GB SD, as well as to the latest version, 4.1.6+. Installation went like a charm until I went to edit my /etc/fstab. I added the same NFS line as I used before:

192.168.0.5:/nfs/Download /mnt/download nfs rsize=8192,wsize=8192,timeo=14,intr 0 0

sudo mount -a works just fine, but the share wasn't mounted after reboot. I googled the issue and found a lot of different suggestions, many related to USB drives. The number one suggestion was adding rootdelay=10 or rootdelay=5 to /boot/cmdline.txt. That would probably solve the issue for USB drives, because the system is unable to identify the drive that early in the boot. The same suggestion was given for NFS failures as well, but it will not work. I tried a lot of suggestions, and even found scripts that run mount -a after boot. That is not a solution, just a workaround!

The suggestion to add x-systemd.automount,noauto to the mount options failed as well. I tried a lot of different configurations with one thing in common: no error in /var/log/syslog.

Finally I realized that the network was not ready! I checked the /etc/network/interfaces settings for eth0.

iface eth0 inet manual

It will still get a DHCP address, but that happens later in the boot process. So when the fstab entries are processed there is no network connection, and therefore the share will not mount. So if you change it to:

iface eth0 inet dhcp

Then the NFS drive will mount just fine after a reboot.

4 Feb 2015

Amazon EC2 Linux – Add additional volumes


Adding additional storage to your Amazon EC2 instance has several advantages. You can select the right storage type for the use case. Why use a fast SSD-backed volume for storing nightly backups when magnetic storage, which is slower, comes at a much lower price?

First you need to provision storage and assign it to your instance. Amazon provides a good guide on how to add additional volumes to your instances. There are several advantages to using several different volumes. As I wrote in my guide to moving MySQL storage, you will not risk filling up the boot disk, which would make the system halt. Other advantages include the selection of storage fit for your purpose and price range, as mentioned above. External volumes can also easily be migrated between instances if and when the need arises. It is also easier when you need to extend your storage space. Instead of making a snapshot of the entire instance and then launching a new one with a bigger drive, you can attach new storage and migrate the data. This approach makes the downtime much shorter.
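
Once the new volume is attached it still has to be formatted and mounted before it can be used. A minimal sketch, assuming the volume shows up as /dev/xvdf and should be mounted at /data (the device name varies between instance types):

# Create a file system on the new, empty volume (this erases anything on it)
sudo mkfs -t ext4 /dev/xvdf
# Create a mount point and mount the volume
sudo mkdir /data
sudo mount /dev/xvdf /data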

When selecting the correct storage for your solution there are a few things to keep in mind. EBS comes in three basic flavors, all with their benefits and disadvantages, so it is important to make an educated decision.
Continue reading...

4 Feb 2015

Move MySQL database storage location

It's always a good idea to keep database storage away from the boot device. If you run out of space on the boot device the system will halt. If you are making a new install it's easy enough to move your storage, and you can do it from a cloud-init script like this:

- mkdir /var/db
- chown -R mysql:mysql /var/db
- sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' /etc/my.cnf
- service mysqld start

If the installation is already up and running you have to add steps for stopping the MySQL server and copying the database files:

mkdir /var/db
service mysqld stop
mv /var/lib/mysql/* /var/db
chown -R mysql:mysql /var/db
sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' /etc/my.cnf
service mysqld start
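
To verify that MySQL picked up the new location after the restart, you can check the datadir variable. A minimal sketch, assuming the MySQL root credentials:

mysql -u root -p -e "SELECT @@datadir;"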

In these examples I have used /var/db, where I mounted the second storage device. You can however use any location you see fit. Points of interest in the command sequence:

chown -R mysql:mysql /var/db

This makes sure that the mysql daemon has access to the storage location.

sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' /etc/my.cnf

sed is a simple tool for search and replace inside text/config files directly from the command line. Here it searches for the line specifying the MySQL datadir location and replaces it with the new value.

4 Sep 2014

Amazon AWS EC2 Linux Swapfile

Amazon EC2 Linux instances come without swap. Sooner or later this will be a problem, with service hangups or crashes as a result, because you run out of memory. I found a lot of instructions on the web about how to add a swap file, but none of them take the storage type into consideration, and you may end up paying a lot for very little performance gain. This article will guide you through swap files on Amazon EC2 Linux hosts.
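
Regardless of which volume you decide to place it on, creating a swap file follows the same basic steps. A minimal sketch, assuming a 1 GB swap file at /swapfile on the volume you picked:

# Create a 1 GB file, restrict permissions, format it as swap and enable it
sudo dd if=/dev/zero of=/swapfile bs=1M count=1024
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile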

Continue reading...