Hackviking He killed Chuck Norris, he ruled dancing so he took up a new hobby…

24 Aug 2016

Reset Windows 10 password

Upgraded one of my laptops to Windows 10 and immediately locked the admin account. Googling turned up a bunch of suggestions that rely on the Windows 10 install CD, but like most other people I upgraded via the Windows 10 upgrade notice that had been bugging me for months. So how do you get back into a Windows 10 machine you locked yourself out of?

Before the upgrade I decrypted my boot disk and uninstalled the old TrueCrypt install I had on there, so accessing the disk wasn't an issue. If you have full disk encryption enabled you will not be able to use this method.

Prepare Hiren's BootCD & Boot

Hiren's BootCD contains a Mini Windows XP version that is perfect for this. Download it and follow the instructions in "Launching Hiren's BootCD from USB Flash Drive"; they have a really good step-by-step guide there. Once that is all done, restart your computer from the USB drive and select "Mini Windows XP".

Prepare for password reset

Once you are booted up, locate your Windows boot drive. In this example I will use E:\ as the Windows boot drive. Locate the following file:

E:\Windows\System32\utilman.exe

Rename it to:

E:\Windows\System32\utilman.exe.bak

Then make a copy of:

E:\Windows\System32\cmd.exe

And rename it to:

E:\Windows\System32\utilman.exe

You can also do this via the command prompt like this:

move e:\windows\system32\utilman.exe e:\windows\system32\utilman.exe.bak
copy e:\windows\system32\cmd.exe e:\windows\system32\utilman.exe

Then reboot your computer and let it start Windows 10.

Change the password

Once at the login screen, press CTRL+ALT+DEL and click the "Utility Manager" icon in the lower right-hand corner. This should launch a command prompt with admin rights. Type in the following commands:

net user <username> /add
net localgroup administrators <username> /add

This adds a new account and puts it in the local administrators group. Close the command prompt and log in with the new account; the password will be blank.

Clean up

Delete C:\Windows\System32\utilman.exe and rename utilman.exe.bak back to utilman.exe.

10 Aug 2016

Unable to delete file: System cannot find the file specified

Running hybrid systems spanning from Windows to different flavors of Linux sometimes presents you with interesting behavior. One issue I have faced every now and then is files that you can't delete due to special characters in the filename. They show up in the file explorer, but when you try to delete them you get "Item not found" or a similar error. I have seen a lot of different solutions online involving third-party software and other complex approaches, but there are two simple built-in ways to deal with this in Windows.

dir /x method

Open up a cmd window and navigate to the folder in question. Run a simple dir /x command and it will list the files with their non-8dot3 short names. Then you can just run del {non-8.3-filename} to get rid of the file.

rd /s "\\?\c:\temp" method

Not all files generate a non-8dot3 name for some reason, don't ask me why - I didn't dig that deep. There is a solution for this as well. In this scenario, make sure that the files you want to get rid of are the only ones in the directory and run rd /s "\\?\C:\folder\containing\problem\file". This command removes all the files and the directory as well.

20 May 2016

BtSync: Refuses to connect to any peers

I have a few ARM-based nodes running BitTorrent Sync (btsync) and needed to re-install one of them. While trying to remove it, I ended up with my main node (the owner of all my folders) no longer connecting to peers or accepting incoming connections. It took me a while to figure out a solution, and I couldn't find much about it on the forums or when I googled, so I thought I'd share this quick story.

Background

This applies, in my case at least, to the distribution installed via apt-get from YeaSoft. After reading a forum thread about how to remove old and abandoned peers, I decided to set the peer_expiration_days setting to 0 to clean the old peers out. So I ran dpkg-reconfigure btsync and set it to 0. The old peers were cleaned out, so I went back to revert the config to its original value. The "wizard" stated that leaving it blank would give the default value of 7 days, so I removed the 0 and saved the configuration. This might have been a mistake on my part, but the configuration tool actually seems broken in this distribution.

Error

After doing that I could not set the value to anything else via dpkg-reconfigure btsync, and no peers could connect or were contacted. Right after recycling the daemon they showed up for a few seconds and then disconnected. Since I'm running the free, unlicensed version I can't switch the owner of the folders, so I had to get this node online again. Changing the config files didn't help since they are, as stated in them, overwritten every time the daemon starts.

Solution

Finally I downloaded the latest version from the getsync.com website and unpacked it in the temp folder. Looking at the command line used in the /etc/init.d/btsync script, I could see which config file it used. So I started the latest version, which has support for "power user options" in the UI, with the same config file parameter. I went into the UI and changed peer_expiration_days back to its original value (there is even a reset-value link). Then I shut down the process, started the original daemon with init.d, and order was restored.

12 Mar 2016

SSL Error: Cannot verify server identity

Phone browsers ship with fewer trusted root and intermediate certificates than many desktop browsers. This can make your https site look good on the desktop but fail on mobile devices, with errors like "unable to verify the identity of the server" and others along those lines. This happens because the certificate chain cannot be verified. No matter which SSL certificate supplier you use, they all chain up to a few root certificates that are shipped with browsers and operating systems as trusted certificates.

Many certificate re-sellers have their certificates further down that chain than others. If the chain can't be traced back to a trusted certificate, the warnings show up. That does not affect the actual encryption of your website (self-signed certificates, for example, still encrypt the traffic), but it looks bad. People can interpret it as a security risk, like a man-in-the-middle attack, or just as low quality.

In this example I have set up a website on an Apache server with a certificate bought from GoDaddy, but I haven't installed the intermediate certificate. Any desktop browser can follow the chain, since it has a different set of trusted certificates, but iPhone or Android devices cannot, since they don't have this certificate. There is a hole in the chain between our website certificate and the trusted one that the device has. By plugging that hole with the intermediate certificate, which our certificate references and which in turn references the trusted certificate on the device, we complete the chain and get rid of the problem.

As mentioned above, this example uses an Apache web server running on Linux and a GoDaddy certificate. The procedure will be different with other web servers and certificate suppliers, but the principle is the same. When your certificate is delivered, always check whether intermediate certificates are included.

So in the zip file that your GoDaddy certificate comes in there is a file named gd_bundle-g2-g1.crt. This is the certificate that your web site certificate is derived from; it sits between that and the trusted certificate higher up in the chain.
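If you want to see the broken-chain behavior for yourself, here is a small sketch using only the openssl CLI: it builds a throwaway root, intermediate, and leaf certificate (all names are made up for the demo, not GoDaddy's), then verifies the leaf with and without the intermediate.

```shell
set -e
dir=$(mktemp -d)
cd "$dir"

# Root CA: stands in for the trusted certificate shipped with the device
openssl req -x509 -newkey rsa:2048 -nodes -keyout root.key -out root.crt \
  -subj "/CN=Demo Root CA" -days 1

# Intermediate CA, signed by the root (this plays the gd_bundle role)
openssl req -newkey rsa:2048 -nodes -keyout inter.key -out inter.csr \
  -subj "/CN=Demo Intermediate CA"
echo "basicConstraints=CA:TRUE" > ca.ext
openssl x509 -req -in inter.csr -CA root.crt -CAkey root.key \
  -set_serial 1 -days 1 -extfile ca.ext -out inter.crt

# Leaf: the web site certificate, signed by the intermediate
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
  -subj "/CN=www.somesite.com"
openssl x509 -req -in leaf.csr -CA inter.crt -CAkey inter.key \
  -set_serial 2 -days 1 -out leaf.crt

# Without the intermediate there is a hole in the chain
openssl verify -CAfile root.crt leaf.crt || echo "chain is broken"
# With it, the chain completes back to the trusted root
openssl verify -CAfile root.crt -untrusted inter.crt leaf.crt
```

A mobile device that only trusts the root is in the first situation; shipping the intermediate from the server, as described below, puts it in the second.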

So on my Apache server I bring up the file /etc/httpd/conf.d/vhost.conf

<VirtualHost 172.30.31.95:80>
    ServerAdmin webmaster@somesite.com
    DocumentRoot /var/www/html/somesite.com
    ServerName www.somesite.com
    ServerAlias somesite.com
    ErrorLog logs/somesite.com-error_log
    CustomLog logs/somesite.com-access_log common
</VirtualHost>
<VirtualHost 172.30.31.95:443>
    ServerAdmin webmaster@somesite.com
    DocumentRoot /var/www/html/somesite.com
    ServerName www.somesite.com
    ServerAlias somesite.com
    ErrorLog logs/somesite.com_ssl-error_log
    CustomLog logs/somesite.com_ssl-access_log common
    SSLEngine on
    SSLCertificateFile /etc/pki/tls/certs/somesite.com.pem
    SSLCertificateKeyFile /etc/pki/tls/certs/somesite.com.key
</VirtualHost>

As you can see we have two ports open: standard port 80 for http and 443 for https. The 443 vhost has the certificate configured along with its private key. Upload the intermediate certificate to the server and copy it into the same folder (/etc/pki/tls/certs) as the other certificate files. Make sure that the Apache server has access to it.

sudo chown -R root:www /var/www

Then add the bundle file to the SSL config in vhost.conf by adding this line just below the SSLCertificateKeyFile line.

SSLCertificateChainFile /etc/pki/tls/certs/gd_bundle-g2-g1.crt

Restart Apache

sudo service httpd restart

Now the certificate chain can be completed on the other devices as well and the error/warning will be gone!

9 Feb 2016

Use UNC path in Filezilla server

Filezilla is widely used for FTP servers; it's open source and easy to set up. It also supports SSL-encrypted FTP connections, which is nice for data security. In one of my setups the need for sharing UNC paths came up. Filezilla actually supports it even though the UI doesn't, so in a few easy steps we can set it up. Remember that the service account running the Filezilla service needs access to the share.

  1. Setup the account in the UI. Point the directory to "c:\temp" or similar.
  2. Open up the "FileZilla Server.xml" located in the Filezilla install directory.
  3. Find the corresponding user node: <User Name="{username}">
  4. Under "<Permissions>" you will have an entry for each folder setup. Change the <Permission Dir="C:\temp"> to <Permission Dir="\\server\share">
  5. Recycle the Filezilla service and you are good to go!


You can then change permissions from the UI if you like, so this workaround is only needed for creating the link to the share. Once again, the FileZilla service account needs access to the share. Running it under "System" or "Network Service" will not work in most cases!
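For reference, here is a minimal sketch of what the edited user node in "FileZilla Server.xml" might look like after step 4; the username and share path are placeholders, not taken from a real install:

```xml
<User Name="backupuser">
  <Permissions>
    <!-- was: <Permission Dir="C:\temp"> before the manual edit -->
    <Permission Dir="\\server\share">
      <!-- the permission options set in the UI stay as they are -->
    </Permission>
  </Permissions>
</User>
```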

4 Feb 2015

Move MySQL database storage location

It's always a good idea to keep storage away from the boot device; if you run out of space on the boot device the system will halt. If you are making a new install it's easy enough to move your storage, and you can do it from a cloud-init script like this:

- mkdir /var/db
- chown -R mysql:mysql /var/db
- sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' /etc/my.cnf
- service mysqld start

If the installation is already up and running you have to add steps for stopping the MySQL server and copying the database files:

mkdir /var/db
service mysqld stop
mv /var/lib/mysql/* /var/db
chown -R mysql:mysql /var/db
sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' /etc/my.cnf
service mysqld start

In these examples I have used /var/db, where I mounted the second storage device. You can however use any location you see fit. Points of interest in the command sequence:

chown -R mysql:mysql /var/db

This makes sure that the mysql daemon has access to the new storage location.

sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' /etc/my.cnf

sed is a simple tool for search and replace inside text/config files directly from the command line. Here it searches for the line specifying the MySQL datadir location and replaces it with the new value.
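If you want to see the substitution in action before pointing it at the real file, here is a tiny demo against a mock my.cnf (the file contents are a made-up minimal example):

```shell
# Create a throwaway config with the default datadir line
tmp=$(mktemp)
printf '[mysqld]\ndatadir=/var/lib/mysql\nsocket=/var/lib/mysql/mysql.sock\n' > "$tmp"

# Same sed call as above, using ':' as the delimiter since the values contain '/'
sed -i 's:datadir=/var/lib/mysql:datadir=/var/db:g' "$tmp"

grep '^datadir=' "$tmp"   # prints: datadir=/var/db
```

Note the `:` delimiter; sed lets you pick any delimiter character, which avoids escaping every slash in the paths.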

3 Feb 2015

Unattended use of mysql_secure_installation

After installing MySQL on any Linux distribution you run the mysql_secure_installation script, or at least you should! It prompts you to set a new root password, remove anonymous access and a few other things. But what if you want this configuration to be done in a deployment or cloud-init script? The mysql_secure_installation command doesn't accept any parameters, so it can't be used for an unattended install. However, you can execute the same commands via the mysql command line tool as long as the service is started.

mysql -e "UPDATE mysql.user SET Password=PASSWORD('{input_password_here}') WHERE User='root';"
mysql -e "DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');"
mysql -e "DELETE FROM mysql.user WHERE User='';"
mysql -e "DROP DATABASE test;"
mysql -e "FLUSH PRIVILEGES;"

I use this to provision new MySQL servers in the Amazon EC2 environment and it works like a charm. If this is used in a cloud-init script, make sure to execute sudo service mysqld start first!
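Put together, the runcmd section of such a cloud-init script could look roughly like this sketch (the password placeholder is left as-is for you to fill in):

```yaml
#cloud-config
runcmd:
  - service mysqld start
  - mysql -e "UPDATE mysql.user SET Password=PASSWORD('{input_password_here}') WHERE User='root';"
  - mysql -e "DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');"
  - mysql -e "DELETE FROM mysql.user WHERE User='';"
  - mysql -e "DROP DATABASE test;"
  - mysql -e "FLUSH PRIVILEGES;"
```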

22 Oct 2014

Free Team Foundation Server in the cloud

During my professional career as a developer I have mostly used Team Foundation Server (TFS) for source control. Back in the day I even used SourceSafe, stone-age history for most people. For my private projects and small startup projects, the "files on disk with occasional zip backups" approach has been way too common. I have also used different Git solutions as well as Google Code. They work fine, but when you are used to TFS they are not as convenient. All the major cloud suppliers want to flirt with the startup community by offering free services that will keep the startups close when they grow bigger. We have seen several examples of this from Microsoft in the past, like BizSpark. Now they offer a free Team Foundation Server in the cloud, called Team Foundation Service or Visual Studio Online. The basic account is free for up to five users with unlimited repositories, and there is support for both TFS and Git repositories!

So far I have added two of my current projects and the performance is really good! There are also many ways to extend the service with your own code and REST APIs. You can use free resources for builds, load testing and more; if you require more resources they can be purchased on a pay-for-what-you-use basis. If your project grows you can add additional team members for $20/month.

20 Oct 2014

How to change from IDE, ATA or RAID to AHCI

I decided to break the RAID 1 on my Dell M6500 so I could run Windows Server 2012 R2 alongside my Windows 7 installation. When the RAID was deleted I thought it would be best to switch my SATA controller over to AHCI, since I'm running two Corsair Force GT SSD drives. After changing to AHCI the computer blue-screens during boot. I have done this several times before, but not often enough to remember what needs to be enabled. This behavior is documented in Microsoft KB922976 ("Error message occurs after you change the SATA mode of the boot drive"), with an automatic registry fix and all. However, this is not the complete solution for all situations.

According to the KB you need to enable loading of the AHCI driver, a no-brainer! You also need to enable the Intel AHCI controller driver. But what is not included in the KB article is that the ATAPI driver also needs to be enabled for this to work. If you are changing from ATA to AHCI it is already enabled, assuming your computer booted with the ATA setting.

So according to the KB you should set these two registry keys to "0":

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\Msahci\Start
HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\IastorV\Start

But you should also check that this one is set to "0":

HKEY_LOCAL_MACHINE\System\CurrentControlSet\Services\atapi\Start

You can also run these commands instead:

REG ADD HKLM\System\CurrentControlSet\Services\msahci /v Start /d 0 /f /t REG_DWORD
REG ADD HKLM\System\CurrentControlSet\Services\IastorV /v Start /d 0 /f /t REG_DWORD
REG ADD HKLM\System\CurrentControlSet\Services\atapi /v Start /d 0 /f /t REG_DWORD

Now your computer will start without the blue screen!

7 Jul 2014

Microsoft SQL Server Performance Basics (I/O Performance)

There are a lot of settings you can tweak to get higher performance out of your Microsoft SQL Server. The most basic one is I/O performance, i.e. disk performance. Usually when I talk to people about this, I get the response that it is an art form and something most techs don't know about or feel they don't understand. Most people rely on the SAN team to take care of it, but if you don't understand it yourself and can't tell the SAN team what you need, you will get the standard setup that most SAN systems are optimized for. There are always more tweaks that can be applied, but in most cases the further you go, the smaller the impact of each change. In this article I would like to point out the most basic, and most important, performance issues with Microsoft SQL Server that are easy to address. These are independent of the size of the solution and of the underlying hardware, e.g. locally attached disks or SAN.

Background

To understand why this is so important you need to know a little about how Microsoft SQL Server reads from disk. To simplify: Microsoft SQL Server reads pages, and a page contains a number of rows with your data. Pages are 8KB in size, and eight contiguous pages make up a 64KB extent. So the goal here is to read (or write) a page with as few disk IOs as possible.

Stripe Unit Size

The stripe unit size is the smallest chunk of data that can be addressed within the RAID, so make sure you are using at least a 64KB stripe size. A larger number like 128KB or 256KB only means that you can write several more pages in the same stripe, which can actually benefit the read-ahead function in Microsoft SQL Server.

File allocation unit size / Disc cluster size

This setting is on the file system level. Microsoft SQL Server is designed for the NTFS file system, and the default NTFS cluster size is 4KB. Again, this should be 64KB for best performance; it enables SQL Server to do fewer IOs than a smaller cluster size would. There is a correlation between cluster size and stripe unit size that needs to be met for optimal performance:

Stripe Unit Size ÷ File Allocation Unit Size = an integer

If possible you should try to satisfy this formula. That isn't always possible due to differences between storage systems; the most important thing for performance in that case is to use the 64KB cluster size! The formula for partition alignment below is, however, not optional for performance!

Partition alignment (partition offset)

When I talk to people about this, most of them look at me like I'm crazy. A system set up from a clean install of Microsoft Windows Server 2008 or later doesn't suffer from this; those versions align the partition automatically. If the partition isn't aligned, your server will end up splitting reads and writes into two or more IOs. This is very bad for performance.

Rule of thumb here is:

Partition Offset ÷ Stripe Unit Size = an integer

Old systems prior to Microsoft Windows Server 2008 could end up with a 31.5KB offset (63 hidden sectors * 512-byte sectors). It doesn't matter what stripe unit size you have (4, 8, 16, 32, 64, 128KB...), it will never make the equation spit out an integer, and that is therefore bad for performance!

So if your system predates Microsoft Windows Server 2008, or has disk partitions created by an earlier version, check the partition offset! It's easily done by running this command:

wmic partition get BlockSize, StartingOffset, Name, Index

To check the stripe unit size you have to refer to your storage controller. The standard offset in Microsoft Windows Server 2008 and later is 1024KB, and with that it doesn't really matter what stripe unit size you have, you will still end up with an integer.
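The two formulas above are easy to sanity-check with plain shell arithmetic. This sketch plugs in the numbers discussed in this article (64KB stripe and cluster, the pre-2008 31.5KB offset, and the 1024KB default), all in bytes:

```shell
stripe=$((64 * 1024))            # 64KB stripe unit size
cluster=$((64 * 1024))           # 64KB NTFS cluster size
old_offset=$((63 * 512))         # 31.5KB offset seen on pre-2008 systems
new_offset=$((1024 * 1024))      # 1024KB default offset since Server 2008

# Stripe Unit Size / File Allocation Unit Size should be an integer
echo "stripe % cluster    = $((stripe % cluster))"       # 0     -> aligned
# Partition Offset / Stripe Unit Size should be an integer
echo "old_offset % stripe = $((old_offset % stripe))"    # 32256 -> misaligned
echo "new_offset % stripe = $((new_offset % stripe))"    # 0     -> aligned
```

The 31.5KB offset leaves a 32256-byte remainder against every power-of-two stripe size, which is exactly why those old partitions split IOs.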

Log files

For SQL Server log files you should use RAID 1, both for read/write performance and for the extra data security. In a RAID 1 you can lose 50% of your disks without losing data; neither RAID 5 nor RAID 10 can guarantee that. It will however cost you half of the storage space.

Do you want to read more?
http://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx
Written by Jimmy May and Denny Lee; it goes deeper into these techniques.