Hackviking.com He killed Chuck Norris, he ruled dancing so he took up a new hobby…

7 Jul 2014

Microsoft SQL Server Performance Basics (I/O Performance)

There are a lot of settings you can tweak to get higher performance out of your Microsoft SQL Server. The most basic one is I/O performance, i.e. disk performance. Usually when I talk to people about this I get the response that it is an art form, something most techs don't know about or feel they don't understand. Most people rely on the SAN team to take care of it, but if you can't tell the SAN team what you need, you will get the standard setup, and most SAN systems are optimized for general workloads rather than for SQL Server. There are always more tweaks that can be applied, but in most cases the further along you get, the smaller the impact of each change. In this article I would like to point out the most basic, and most important, performance issues with Microsoft SQL Server that are easy to address. These are independent of the size of the solution and the underlying hardware, e.g. locally attached disks or SAN.

Background

To understand why this is so important you need to know a little about how Microsoft SQL Server reads from disk. To simplify: Microsoft SQL Server reads and writes pages, and each page contains a number of rows with your data. A page is 8KB, and pages are grouped into extents of eight pages, i.e. 64KB. So the goal here is to read (or write) a page with as few disk I/Os as possible.

Stripe Unit Size

The stripe unit size is the smallest chunk of data that can be addressed within the RAID set. So make sure you are using at least a 64KB stripe size. A larger value like 128KB or 256KB only means that several more pages fit in the same stripe, which can actually benefit the read-ahead function in Microsoft SQL Server.

File allocation unit size / Disc cluster size

This setting is on the file system level. Microsoft SQL Server is designed for the NTFS file system, and the default NTFS disk cluster size is 4KB. Again, this should be 64KB for best performance; it lets SQL Server do fewer I/Os than a smaller cluster size would. There is a correlation between cluster size and stripe unit size that needs to be met for optimal performance:

Stripe Unit Size ÷ File Allocation Unit Size = an integer

If possible you should try to meet this formula. That isn't always possible due to differences between storage systems; the most important thing for performance in that case is to use the 64KB cluster size! The partition alignment formula below, however, is not optional for performance!
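As a quick sanity check, the formula above can be sketched in a few lines of Python (the sizes used here are just example values, not recommendations for your particular storage):

```python
def is_integer_ratio(numerator_kb, denominator_kb):
    """True when numerator / denominator comes out as a whole number."""
    return numerator_kb % denominator_kb == 0

# Stripe Unit Size / File Allocation Unit Size should be an integer.
print(is_integer_ratio(256, 64))  # 256KB stripe, 64KB cluster -> fine
print(is_integer_ratio(32, 64))   # 32KB stripe, 64KB cluster -> misaligned
```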

Partition alignment (partition offset)

When I talk to people about this, most look at me like I'm crazy. A system set up from a clean install of Microsoft Windows Server 2008 or later doesn't suffer from this, since those versions align the partition automatically. If the partition isn't aligned, your server ends up splitting each read and write into two or more I/Os, which is very bad for performance.

The rule of thumb here is:

Partition Offset ÷ Stripe Unit Size = an integer

Old systems, prior to Microsoft Windows Server 2008, could end up with a 31.5KB offset (63 hidden sectors × 512-byte sectors). It doesn't matter what stripe unit size you have, 4, 8, 16, 32, 64, 128KB... it will never make the equation come out as an integer, and is therefore bad for performance!

So if your system predates Microsoft Windows Server 2008, or has disk partitions created by an earlier version, check the partition offset! It's easily done by running this command:

wmic partition get BlockSize, StartingOffset, Name, Index

To check the stripe unit size you have to refer to your storage controller. The standard offset in Microsoft Windows Server 2008 and later is 1024KB, and with that value it doesn't really matter what stripe unit size you have; you will still end up with an integer.
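The arithmetic behind that claim is easy to verify. This small Python sketch (stripe sizes chosen as typical power-of-two examples) shows that the old 63-sector offset never divides evenly, while the 1024KB default always does:

```python
old_offset = 63 * 512      # 32256 bytes, i.e. the old 31.5KB offset
new_offset = 1024 * 1024   # 1024KB, the Windows Server 2008+ default

stripe_sizes = [k * 1024 for k in (4, 8, 16, 32, 64, 128, 256)]

# Partition Offset / Stripe Unit Size must be an integer (remainder 0).
print([old_offset % s == 0 for s in stripe_sizes])  # all False
print([new_offset % s == 0 for s in stripe_sizes])  # all True
```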

Log files

For SQL Server log files you should use RAID 1, both for good read/write performance and for the extra data security. In a RAID 1 mirror you can lose 50% of your disks without losing data, something neither RAID 5 nor RAID 10 can guarantee. It will, however, cost you half of the storage space.

Do you want to read more?
http://technet.microsoft.com/en-us/library/dd758814(v=sql.100).aspx
Written by Jimmy May and Denny Lee, it goes deeper into these techniques.


30 Jun 2014

Migrate WordPress to Google App Engine

Wordpress on Google App Engine

Why not run WordPress on Google App Engine? You get performance and stability while only paying for the resources you actually use. Reading the official Google tutorial "Running WordPress in App Engine" gives you a fair idea of what you are in for, but if you want to migrate a currently running site you have to do some tweaking. So here is a rundown of how to do it!

    1. Download all data from current site

      Download all files in the public html directory from your site. You also have to obtain a copy of your database; how you do this depends on your current hosting setup. Please note that comments and other content added after this DB download will not be available in the new installation. It's therefore recommended that you take a fresh backup once you see that everything works as it's supposed to.

    2. Setup local environment

      To do this you need four components installed on your local machine.
      Python – Even though you're developing in PHP, Python is required for the Google App Engine Launcher to work.
      PHP – Download and install the latest version.
      Google App Engine SDK – Includes all you need to develop, test and deploy to Google App Engine.
      MySQL – For database support in the dev environment.

    3. Setup Google App Engine Launcher

      In the Google App Engine Launcher "File" menu, select "Create new application...".
      "Application name" – Will be the Google App Engine identifier when you upload. Don't worry, this can be changed later.
      "Parent Directory" – Where you will store the files locally.
      "Runtime" – Select PHP.
      Delete the "main.php" file that is created in the folder.
      Copy in all the contents you downloaded from your public html directory.

    4. Setup "app.yaml"

      The "app.yaml" file contains the configuration for Google App Engine. The first line contains "application:"; you will notice that this is the same as the "Application name" you selected in the previous step. Important to remember: this will be part of your web address if you don't use a custom domain, e.g. example.com, so in that case you will end up with applicationname.appspot.com.

      The next few rows are nothing you need to worry about: version, runtime, api_version and threadsafe can be left as they are.

      The handlers: section, on the other hand, is really important to get the WordPress site working on Google App Engine. You should use this:

      handlers:
      - url: /(.*\.(htm|html|css|js))$
        static_files: \1
        upload: .*\.(htm|html|css|js)$
        application_readable: true

      - url: /wp-content/(.*\.(ico|jpg|png|gif))$
        static_files: wp-content/\1
        upload: wp-content/.*\.(ico|jpg|png|gif)$
        application_readable: true

      - url: /(.*\.(ico|jpg|png|gif))$
        static_files: \1
        upload: .*\.(ico|jpg|png|gif)$

      - url: /wp-admin/(.+)
        script: wp-admin/\1
        secure: always

      - url: /wp-admin/
        script: wp-admin/index.php
        secure: always

      - url: /wp-login.php
        script: wp-login.php
        secure: always

      - url: /wp-cron.php
        script: wp-cron.php
        login: admin

      - url: /xmlrpc.php
        script: xmlrpc.php

      - url: /wp-(.+).php
        script: wp-\1.php

      - url: /(.+)?/?
        script: index.php

    5. Setup "cron.yaml"

      This file is for the cron jobs, the same concept as on Linux/Unix machines, and replaces the timed jobs WordPress has on other platforms. The file should contain:

      cron:
      - description: wordpress cron tasks
        url: /wp-cron.php
        schedule: every 2 hours

    6. Setup local database

      After installing the MySQL software, start MySQL Workbench. (There may be some initial configuration wizards here; I already had it installed when I wrote this tutorial!)
      Double click the "Local instance MySQL..."
      If "Server Status" is "Stopped" then go to the "Server" menu and select "Startup/Shutdown" then click the "Start Server" button.
      When server is "Running" click the "Data Import/Restore" in the left hand "Navigator" menu.
      Select the "Import from Self-Contained File" radio button then browse for the database backup/dump you downloaded in step one.
      Under "Default Schema to be Imported To" click the "New..." button.
      Name the schema something you will remember.
      Click the "Start Import" button.
      When the database is successfully imported hit the little refresh icon next to "Schemas" in the left hand "Navigator" menu, now you should see your new database (or schema as it's called in MySQL).
      Double click on your new database and then on "Tables", find the "wp_options" table, then right click and select "Select Rows - Limit 1000".
      Find the row where the field “option_name” is “active_plugins” and delete it! (This is to disable all active plugins before we upload)
      Find the rows where "option_name" is “siteurl” and “home” and set them to “http://localhost:portnumber”. The port number should correspond to the one in the applications "port" column in the Google App Engine Launcher application list.

      NOTE: You can also run this query against the server to delete the "active_plugins" row:

      DELETE FROM wp_options WHERE option_name = 'active_plugins';

    7. Configure “wp-config.php”

      Now you need to set up WordPress to access your database. This information differs between your local environment and the Google App Engine environment. I simply use an if statement to take care of that and save myself the trouble of changing the wp-config.php file every time I update my site.

      if (isset($_SERVER['SERVER_SOFTWARE']) && strpos($_SERVER['SERVER_SOFTWARE'], 'Google App Engine') !== false) {
          /** Google App Engine Settings */
          define('DB_NAME', 'hackviking_com');
          define('DB_USER', 'root');
          define('DB_PASSWORD', '');
          define('DB_HOST', ':/cloudsql/wp-hackviking-com:db');
      } else {
          /** Local Settings */
          define('DB_NAME', 'hackviking_com');
          define('DB_USER', 'root');
          define('DB_PASSWORD', 'mylocaldbpassword');
          define('DB_HOST', 'localhost');
      }

      DB_NAME - Name of the database. Locally it's the schema name you selected in step 6 when you imported your database. For the name used on Google App Engine please see step 9.
      DB_USER - Usually it's root locally if you haven't customized your setup. On Google App Engine it's always root.
      DB_PASSWORD - Password for the corresponding DB_USER. On Google App Engine the password for root is always empty, since access is controlled at the app level.
      DB_HOST - On local environment it's "localhost". On Google App Engine please see step 9.

    8. Prepare local site for upload

      Run the site and login to the WordPress admin site e.g. "http://localhost:<portnumber>/wp-admin".
      Run all available updates; you will not be able to do this once the site is uploaded. To install plugins and updates you have to do it locally and then redeploy, so it's a good idea to keep your local environment around to take care of this later on.
      Delete all WordPress plugins that will not work on Google App Engine. It's not easy to know before you try, but a rule of thumb is that caching and minifying plugins won't work.
      Install WordPress plugins required for Google App Engine.
      Google App Engine for WordPress
      Memcache Object Cache
      Batcache

    9. Setup the "Google Cloud Platform"

      1. Log in and create a new project by clicking "Create Project". If the "Project ID" is available it will automatically be the name of your Google App Engine instance, and that name is part of your web address, e.g. <projectID>.appspot.com. NOTE: This cannot be changed later; if you're not happy with your URL you have to create a new project!
      2. To be able to run "Cloud SQL", needed for the MySQL implementation in WordPress, you have to enable billing for the project. In the project overview, click your new project, go to "Billing & Settings" and click "Enable Billing". Fill out all the billing details and verify your payment option.
      3. Now you can create the database for your system. In your project view select "Storage" -> "Cloud SQL" -> "New Instance".
      "Instance ID" - The name of your database. The full name will be <projectID>:<InstanceID>.
      "Tier" - Start out with "D0 - 128M RAM" and upgrade later if you need to.
      "Billing plan" - I prefer "Per Use" when starting out.
      "Preferred Location" - The logical choice is to select "Follow App Engine App".
      "Assign IP Address" - Select "Assign IP Address"; we need this later to upload our database! NOTE: DON'T FORGET TO DELETE THIS WHEN YOU'RE DONE OR IT WILL COST YOU!
      "Authorized IP Address" - The public IP address you will use to access the database later. If you don't know it, visit whatsmyip.org.
      Click "Confirm" and the database will be set up. Go to "Access Control" and use the "Set root password" to set a password for the root account.
      4. Setup file storage for file uploads.
      Before you can set this up you need the "Service Account Name" for your Google App Engine instance. The easiest way to find it is to select "Compute" -> "App Engine" -> "Quota details", which sends you to the Google App Engine dashboard. Then select "Application Settings" under "Administration" and copy the "Service Account Name".
      From the project view go to "Storage" -> "Cloud Storage" -> "Storage Browser" -> "New Bucket" and select a name for it. I like to use the same name as my "Project ID".
      Then tick the box next to your new bucket and click "Bucket Permissions".
      Under "Add another permission" select "User" in the drop down, enter the "Service Account Name", select "Writer" as the permission and then click "Add".

    10. Upload database

      So now you have to get your database online. Start by creating a backup of your local database.
      If you didn't authorize your IP address in step 9 follow these instructions to do so:
      1. Log in to your Google Cloud Console.
      2. Select your project name.
      3. Click  "Storage" -> “Cloud SQL”.
      4. Select your instance name.
      5. Click “access control”.
      6. Click "request an IP address".
      7. Click "add authorized network" and enter your public IP address. If you don't know it, visit whatsmyip.org.
      8. Click “set root password” and set it to something you remember.
      9. Connect with MySQL workbench to the database instance you created in step 9 and import your database backup to a new schema.
      10. Go into the database and set the "siteurl" and "home" rows to http://<projectID>.appspot.com.
      11. If the row “active_plugins” exists delete it!
      NOTE: Don't forget to disable the external access when you're done, both for security reasons and for cost; it costs money for every hour it's open.

    11. Final WordPress config & deployment

      Now you have to go back to step 7 and put the new configuration into the "wp-config.php" file.
      In step 4 we talked about app.yaml; now you need to update that file! The line saying application: should be updated with your project ID from step 9.
      When this is done, hit the "Deploy" button in Google App Engine Launcher to upload your site.

    12. Config site online

      Go to <projectID>.appspot.com/wp-admin and login with your admin account.
      Enable plugins:
      Google App Engine for WordPress
      Memcache
      Batcache
      When this is done go to "Settings" -> "App Engine"; under "Upload Settings" you have "Bucket name", fill in the one you set up in step 9. This is for storing uploaded media files.
      Then enable your remaining plugins one by one and test that the site is working. If you encounter issues, just enable external DB access again and remove the "active_plugins" row from the database.

Troubleshooting

If you encounter issues with WordPress caching still being enabled, try the following:
Delete "advanced-cache.php"
Delete "wp-cache-config.php"
Delete the "cache" folder from "wp-content" and "wp-includes"
In "wp-config.php" set "WP_CACHE" to "false"

 

Hope you liked this tutorial; if so, please share it with others! Tell me what you think in the comments below and give me any suggestions/requests you have! Stay tuned for more articles about WordPress and App Engine.

19 Jun 2014

Facebook offline

This morning Facebook seemed to be offline! The world's largest social network was down worldwide. So far reports have come from the U.K., USA and Sweden, which indicates that this was a global outage. Currently the site seems to be available again for most users, and no official information has been posted.

Facebook Offline

18 Jun 2014

Exchange – List all e-mail addresses on domain

Listing all e-mail addresses for a domain on an Exchange server is pretty easy from PowerShell, but it took me a while to figure out, so I thought I would share it.

get-recipient | where {$_.emailaddresses -match "<domain>"} | fl name,emailaddresses >> c:\addresses.txt

18 Jun 2014

Hyper-V – accessing info about physical machine from guest VM

More than once I have run into the problem of not knowing which physical server a Hyper-V hosted VM is running on. Getting RDP access to a client's system and then needing help from the SAN or network teams, their first question is: "What physical machine is the host running on?" Without access to the physical servers or System Manager it's hard to know. You could probably figure it out from assigned IP addresses, but there is a quicker way if the Hyper-V Integration Services are installed.

This registry key: HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters

It contains the following information:

  • HostName
  • PhysicalHostName
  • PhysicalHostNameFullyQualified
  • VirtualMachineName
  • Additional info about server version etc.

You can also run this from the CMD:
reg query "HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters" /v PhysicalHostName

If you have remote access:
reg query "\\<machine>\HKLM\SOFTWARE\Microsoft\Virtual Machine\Guest\Parameters" /v PhysicalHostName

Of course you can replace "PhysicalHostName" with any of the values above!
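If you want to grab these values from a script, here is a minimal Python sketch that parses the text output of the reg query command above into a dict. The sample output string is hypothetical; the real host and VM names obviously depend on your environment:

```python
def parse_reg_query(output):
    """Parse `reg query` output lines of the form: Name  REG_SZ  Value."""
    values = {}
    for line in output.splitlines():
        parts = line.split(None, 2)  # name, type, rest of line as value
        if len(parts) == 3 and parts[1].startswith('REG_'):
            values[parts[0]] = parts[2]
    return values

# Hypothetical sample of what `reg query ... Parameters` prints:
sample = """
HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Virtual Machine\\Guest\\Parameters
    PhysicalHostName    REG_SZ    HYPERV01
    VirtualMachineName    REG_SZ    WEB01
"""
print(parse_reg_query(sample)['PhysicalHostName'])
```

In a real script you would feed it the output of subprocess.run(["reg", "query", ...]) on the guest VM.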

1 May 2014

Moved site to Google App Engine

Got the last invoice from my hosting company and wasn't about to pay them good money for another year of services I'm not satisfied with! So I moved the blog to Google App Engine instead. There are still a few things that need to be sorted, but for the most part it works fine. I will publish a guide for migrating existing WordPress sites to Google App Engine shortly. I followed the official documentation from Google but found a few pitfalls when migrating an existing site instead of setting up a new one.

This was supposed to be online ages ago, but I just moved to a new apartment and started a new job, so I had a lot on my plate. Soon I will be back with a lot of exciting projects I've been working on.

1 Feb 2014

Python: Remove querystring from URL

Needed to clean a URL of its query string in Python. Found a lot of examples telling me to use urlparse and then put all the bits and pieces back together. An easier and more efficient way is this:

url = 'http://www.hackviking.com/?var=value'
url = url.split('?')[0]

url now reads 'http://www.hackviking.com/'. (Note: split is also safe when the URL has no '?'; slicing with url[:url.find('?')] would drop the last character in that case, since find() returns -1.)

16 Jan 2014

.Net C#: Webbrowser control print line break

I was updating an old project for a client today. I needed to print a simple list and figured the easiest way would be to format the data in HTML. Most developers have built webpages nowadays, and we all know it's a quick way to format information and print it. After quickly creating the HTML I found an article named "Displaying custom HTML in WebBrowser control" by Gunnar Peipman. So it's easy enough to print custom HTML from the control; based on Gunnar's article I came up with this code:

WebBrowser webPrint = new WebBrowser();
webPrint.Navigate("about:blank");

if (webPrint.Document != null)
{
    webPrint.Document.Write(string.Empty);
}

// Hook up the event before assigning DocumentText so it can't be missed.
webPrint.DocumentCompleted += webPrint_DocumentCompleted;
webPrint.DocumentText = printHTML.ToString();

void webPrint_DocumentCompleted(object sender, WebBrowserDocumentCompletedEventArgs e)
{
    WebBrowser webPrint = (WebBrowser)sender;
    webPrint.ShowPrintDialog();
}

If you don't use the "DocumentCompleted" event handler your printout will be blank. In the printout I used a basic HTML table to show the data, with rows like this:

<table cellpadding="3" style="width:100%">
<tr style="color:White;background-color:Black;font-weight:bold;"><td>Box ID</td><td>Gång</td><td>Hylla</td><td>Nivå</td><td>Nåbar från golvet</td></tr>
<tr><td>10346</td><td>AB</td><td>3</td><td>F</td><td>True</td></tr>
<tr><td>10294</td><td>AB</td><td>3</td><td>D</td><td>True</td></tr>
<tr><td>10296</td><td>AB</td><td>3</td><td>C</td><td>True</td></tr>
<tr><td>10298</td><td>AB</td><td>3</td><td>C</td><td>True</td></tr>
<tr><td>10300</td><td>AB</td><td>3</td><td>B</td><td>True</td></tr>
<tr><td>10302</td><td>AB</td><td>3</td><td>B</td><td>True</td></tr>

When printing data that took up several pages I ended up with this:

pagebreak error

The data was cut off at the page breaks. I Googled a lot and found a few suggestions about the CSS "page-break" properties, but they didn't have any effect on the WebBrowser control's print layout. Finally I found a simple solution: just add a blank column to the left of the table and the page break works. Like this:

<table cellpadding="3" style="width:100%">
<tr style="color:White;background-color:Black;font-weight:bold;"><td> </td><td>Box ID</td><td>Gång</td><td>Hylla</td><td>Nivå</td><td>Nåbar från golvet</td></tr>
<tr><td> </td><td>10346</td><td>AB</td><td>3</td><td>F</td><td>True</td></tr>
<tr><td> </td><td>10294</td><td>AB</td><td>3</td><td>D</td><td>True</td></tr>
<tr><td> </td><td>10296</td><td>AB</td><td>3</td><td>C</td><td>True</td></tr>
<tr><td> </td><td>10298</td><td>AB</td><td>3</td><td>C</td><td>True</td></tr>
<tr><td> </td><td>10300</td><td>AB</td><td>3</td><td>B</td><td>True</td></tr>
<tr><td> </td><td>10302</td><td>AB</td><td>3</td><td>B</td><td>True</td></tr>
<tr><td> </td><td>10382</td><td>AB</td><td>4</td><td>F</td><td>True</td></tr>

Then the printout looked like this:

pagebreak fix

30 Oct 2013

Google Picasa API Python: Developers Guide

Before you ask... Yes I struggled with the title of this post, but give me a break it's 4:30 am here! :)

I'm currently developing in Python for Google App Engine and am using the Picasa API. I found this great guide from Google with examples: https://developers.google.com/picasa-web/docs/1.0/developers_guide_python It's far from complete, but the examples push me in the right direction, and some testing and trial and error gives me a chance to learn even more! So I thought I would share one of the things that got me into trouble this morning.

I was trying to add a new album to the feed and got this error:

(404, 'Not Found', 'Unknown user.')

Even though the user is authenticated via OAuth. Then I realized this was the same issue I had before: the user's e-mail isn't in the mix. I needed it for layout purposes before, to display profiles and so on; you can read about that here: Python: Get user info after oauth. All the requests I had done before, getting feeds of albums, pictures and so on, used URIs that I supplied in the code. This function just looks like this:

gd_client.InsertAlbum(title='Test', summary='a test album')

Where gd_client is an instance of gdata.photos.service.PhotosService() with the OAuth token already specified, using the code example from the official Google guide spiced with the functions from my previously mentioned post. But it still doesn't work! So I realized that even though the client is authorized, it has no idea which user it should insert the new album for. According to the feed documentation you should do a post like this:

POST https://picasaweb.google.com/data/feed/api/user/userID

<entry xmlns='http://www.w3.org/2005/Atom'
xmlns:media='http://search.yahoo.com/mrss/'
xmlns:gphoto='http://schemas.google.com/photos/2007'>
<title type='text'>Trip To Italy</title>
<summary type='text'>This was the recent trip I took to Italy.</summary>
<gphoto:location>Italy</gphoto:location>
<gphoto:access>public</gphoto:access>
<gphoto:timestamp>1152255600000</gphoto:timestamp>
<media:group>
<media:keywords>italy, vacation</media:keywords>
</media:group>
<category scheme='http://schemas.google.com/g/2005#kind'
term='http://schemas.google.com/photos/2007#album'></category>
</entry>

So here is where it goes wrong. Looking at what gd_client is actually doing, it posts to https://picasaweb.google.com/data/feed/api/user/ with no user ID at the end. There is a really simple solution; if we return to the OAuth example from the guide:

gd_client = gdata.photos.service.PhotosService()
gd_client.auth_token = authsub_token
gd_client.UpgradeToSessionToken()

This is a cutout where you get the "use once" token back and upgrade it to a session token. Here you can do one of two things to make gd_client use the OAuth user as the default all the time, either:

gd_client = gdata.photos.service.PhotosService(email='default')

or

gd_client.email = 'default'

Then it all will work just fine!

30 Oct 2013

Google Code Project Home Page: Tips & Tricks

When I was updating the home page for Picasa Web Downloader on Google Code I found two things that might interest others! :)

Paypal Donation Buttons

I found more written on this subject than about any other Google Code markup issue. I have to say there is a really easy way! Just create a donation button on PayPal, copy the URL for the image, and use the e-mail donation link to create a common <a> tag in the markup, like this:

<a href="https://www.paypal.com/cgi-bin/webscr?cmd=_s-xclick&hosted_button_id=XFGNUAPH5YR8C"><img src="https://www.paypalobjects.com/en_US/SE/i/btn/btn_donateCC_LG.gif" /></a>

Just be sure to close the img tag with /> otherwise it will mess up the markup.

Download Links

Google stopped supporting download links because of misuse. I found a blog entry written by one of their techs suggesting Google Drive instead. In what way can you misuse free shared storage on Google Code that you can't misuse on Google Drive? Anyway, I put up the link, but when someone clicks the link for the latest release they get a view of the zip file contents on Google Drive instead of a download. There is an easy fix for this! The link you get when you share a file looks like this:

https://drive.google.com/file/d/0B7xgtMzrLFNNTE1XUUtjNXJsYVU/edit?usp=sharing

Just take the ID part (0B7xgtMzrLFNNTE1XUUtjNXJsYVU) and put it in this line below:

https://docs.google.com/uc?export=download&id=YourIndividualID

That will send the user directly to the download!
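If you hand out a lot of these links, the ID swap above can be scripted. Here is a small Python sketch (the function name is my own) that pulls the ID out of a share link and builds the direct-download URL:

```python
import re

def direct_download_url(share_url):
    """Turn a Google Drive share link into a direct-download link."""
    match = re.search(r'/file/d/([^/]+)', share_url)
    if not match:
        raise ValueError('no Google Drive file ID found in URL')
    return 'https://docs.google.com/uc?export=download&id=' + match.group(1)

share = 'https://drive.google.com/file/d/0B7xgtMzrLFNNTE1XUUtjNXJsYVU/edit?usp=sharing'
print(direct_download_url(share))
```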

 
