Get Started with Amazon EC2, Run Your .NET MVC3 (Razor) Site in the Cloud with Linux Mono

I’ve recently been getting pretty excited about Amazon EC2 once I realized you can set up a micro Linux server for free to learn and test out their services. I know, a couple years behind the curve, right? I’d just never really looked into it. Now that I’m familiarizing myself with the whole cloud concept through Amazon, it’s really cool to think of small-business growth potential in the web market without the restraint of building a heavy infrastructure upfront. After reading some other articles, I realize that in the long run it’s probably more expensive than hosting your own solution. But as a startup, it still seems like a pretty good deal, at least for short-term events or just taking their infrastructure out for a spin to test some ideas. Pay for what you use; that’s Amazon’s big pitch with Amazon Web Services (AWS).

So if you want to play around with AWS for free, you have to roll Linux. If you want to host ASP.NET MVC3 on AWS Linux, there are a few steps you need to take. For this post, I’ll cover some of the AWS services that can get you started, along with how to install and configure your environment for hosting ASP.NET MVC3 applications with Linux and Mono.

Disclaimer:

Before you try any of this, carefully read all the terms and conditions regarding Amazon Web Services. Review their pricing structure for every service you plan to use and the qualifying restrictions for the free tier. While their pricing structure is very competitive, it’s also very complicated, so be sure you fully understand a service before enabling it. Payment information is required to set up your AWS account, but you will only be charged when you use services that are not free. I am not responsible for any fees you may incur while using AWS services.

 

Conventions Used in This Article

Intended Audience

I’m writing this post for people who are mostly familiar with Windows environments and may have some experience with Linux. I may over-explain a few concepts, like using SSH or tab completion, mainly for the Windows folks who may have never seen them before. Otherwise, this post can serve as a reference guide to AWS and Mono web hosting in general. We’ll use the default MVC 3 application as our sample, which might be interesting if you haven’t worked with it yet. If you’ve never worked with Mono web hosting before, always start with something simple and familiar. I wouldn’t recommend taking your massive, enterprise, e-commerce solution and running it straight up on Mono without expecting a few hiccups. The MoMA tool will be useful when migrating, since it scans your assemblies and identifies functionality not yet supported by Mono.

Keystrokes

There are a few conventions used throughout this tutorial. I’ll indicate keystrokes with {} characters, like in the example: {Ctrl} for a Control keystroke. I’ll hyphenate key combos like {Ctrl-D} for Control and D pressed simultaneously. And I’ll separate them for consecutive keystrokes: {Ctrl}{D} for Control then D.

Tab Completion

Another useful tip I’ll sometimes mention: when using the bash shell, you can type a segment of a directory or file name and press {TAB} to complete it. It will complete up to the next unique segment, so if there are two files with similar names, it completes the common portion of the names and waits for input. You can add the next unique character and press {TAB} again to make it go further. This is very useful when entering long paths or filenames on the command line. For example, to run a script named install_mono-2.10.5.sh you can simply type:

./in{TAB}

Wait for the completion, then if it’s correct, press {ENTER} to execute it. Sometimes you’ll have to add a few extra characters to help it out. Any path or git branch argument (and probably many other things) in Linux uses this functionality.

Linux Command Line

As always with the Linux command line, if you want to know how a specific command works, you can check its manual pages by entering “man <command>”. For example, to see the manual for ssh, enter “man ssh”. To exit the manual, just hit {Q}.

 

Amazon Web Services

Amazon Web Services (AWS) is a lot like an ad hoc enterprise infrastructure. You pick and choose the components you want and put them together in the arrangement of your choice. Every service comes with a cost, so you weigh your infrastructure design against your needs. The two main services we’ll use to get started are EC2 (Elastic Compute Cloud), with EBS (Elastic Block Store) for its local file system, and S3 (Simple Storage Service) for our static content storage (and delivery).

Elastic Compute Cloud (EC2)

EC2 is simply a server that can provide any service we want. It’s computing on demand, and you pay by the hour. For smaller environments (like this walkthrough), we’ll fire up one micro instance to act as both database and web server; in larger environments, you might set up a pool of database servers and a pool of application servers with horizontal scaling in mind. With EC2 you have the option of using the built-in instance storage (usually about 160GB for paid servers) or using the Elastic Block Store (EBS) service. EBS provides persistent storage for your instances; you can snapshot volumes on existing servers or clone them into new ones.
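As a quick taste of what that looks like from a command line, here’s a sketch using Amazon’s EC2 API tools (a separate download; this assumes the tools are installed and configured with your credentials, and the volume ID below is a placeholder):

# snapshot an EBS volume so it can be restored or cloned later
# (vol-12345678 is a placeholder; substitute one of your own volumes)
ec2-create-snapshot vol-12345678 -d "pre-upgrade backup"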

Amazon Simple Storage Service (S3) & CloudFront

The S3 service provides centralized storage for all your applications. S3 is accessible from outside the Amazon network, and the price is pretty competitive, starting at roughly $0.14 USD/GB/month. Using this service is not required, but if you have an application with a lot of static assets, it’s a good option for keeping them on the AWS network. You can also expose your files in S3 to the public, essentially turning it into a static file web server. If you need a more responsive edge network, S3 can serve as the data source for the CloudFront service, which deploys your assets to edge locations around the world.
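Just to give a flavor of publishing a public file, here’s a sketch using the third-party s3cmd tool (installable from the Ubuntu repositories; the bucket name is a placeholder, and you’d run “s3cmd --configure” first to enter your AWS keys):

# install the tool, then upload a file to S3 and make it publicly readable
sudo apt-get install s3cmd
s3cmd put --acl-public site-logo.png s3://my-sample-bucket/assets/site-logo.png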

 

Tools

In order to use AWS on Linux, you’ll need an interactive SSH client and some common Linux utilities like tar, gzip, etc. Linux/Mac users likely have these tools already set up. Windows folks (like me), however, will need to install a few things.

MsysGit comes with all these utilities nicely packaged. Msys is a set of GNU utilities recompiled for Windows. It provides a bash command-line shell for Windows that is nearly identical to the one on Linux, and the tools run natively. It also comes with OpenSSH and scp, which come in extremely useful when connecting to servers or moving files between Windows and Linux.
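Once MsysGit is installed, you can verify the toolchain from a Git Bash prompt:

# each of these should print version info if the tools are on your path
ssh -V
tar --version
gzip --version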

PuTTY is another great interactive SSH utility, and FileZilla is a great graphical file transfer client that works over SSH. I commonly bounce between all these tools in my environments. If you want to use PuTTY with your AWS server, you’ll also need the PuTTYgen utility available on PuTTY’s website. It converts your OpenSSH-format key into PuTTY’s own key format for use with PuTTY and FileZilla.

Finally, if you want to use AWS services programmatically, you’ll of course need to download the AWS SDK. Mac/Linux users can scroll to the bottom of the page and download the “DLLs and Samples” version, which is packaged in a ZIP for Mono.

Required Windows tools for this walk-through:

  • MsysGit installer  - During install, make sure you enable the Windows Explorer shell context menu for Git Bash Here.

 

Setup EC2 Instance

Before you get started setting up a new EC2 instance, you need to register an AWS account. This will require a credit card. If you’re a new customer and stay within the terms and conditions for the free tier, you don’t need to worry about being charged usage fees. Registration takes a little while; eventually, you’ll receive an email stating your account is active and ready to go.

Launch New Instance

Sign into your AWS Management Console and enter the EC2 section. When creating a new EC2 instance, we first need to find the Amazon Machine Image (AMI) to use for our new server. Canonical provides a few pre-built ones for different versions of Ubuntu, and they occasionally release updated AMIs. I chose the most recent (at the time of this post) Lucid (10.04 LTS) build.

To get started, click Launch New Instance in the top left. First, choose the Community AMIs tab and search for ami-61be7908. Also, watch for the star that indicates a “free tier” server. Make sure you choose a free tier AMI!

Step through the setup wizard for your new AMI. I used mostly defaults to stay within the free tier.

 

The important thing here is that we’re choosing a micro (free tier) instance. Starting out, I would open up just port 80 (HTTP) and port 22 (SSH). When you finish the wizard, your instance will automatically start up.
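For reference, the same launch and firewall setup can be scripted with Amazon’s EC2 API tools. This is just a sketch; it assumes the tools are installed and configured, and the key pair and security group names are examples:

# launch one micro instance of the same Ubuntu AMI
ec2-run-instances ami-61be7908 -t t1.micro -k mykey -g default
# open SSH and HTTP (TCP) in the default security group
ec2-authorize default -p 22
ec2-authorize default -p 80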

During step three, you will create a new key pair if you haven’t already created one. You will need to download the key file to your local machine and place it somewhere easily accessible from a command line. (In my case, I saved it to the ~/.ssh folder, which translates to C:\Users\<username>\.ssh in Windows 7.)
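From a Git Bash prompt, stashing the downloaded key looks something like this (the download path is just an example; adjust it if yours differs):

# create the .ssh folder if needed and move the key into it
# ($USERNAME comes from the Windows environment)
mkdir -p ~/.ssh
mv /c/Users/$USERNAME/Downloads/mykey.pem ~/.ssh/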

Setup an Elastic IP

Before we connect to the new server, we also need to set up an Elastic IP. Elastic IPs act like static IPs, except that when you’re done using one, Amazon releases it back into the pool for someone else to use. This allows you to shut down or reboot your server and still keep the same IP. Using this service is free as long as you have a running instance associated with it; otherwise, Amazon will charge for its use to discourage wasting unused IP allocations on their network.

Since we have a new EC2 instance up and running, go ahead and allocate an Elastic IP. To do this, make sure you’re still in the EC2 section of the AWS dashboard and click the Elastic IPs link under the left navigation labeled Network & Security. Click the “Allocate New Address” button at the top, select the EC2 option, and click “Yes, Allocate.” You will now have a dedicated IP in your list. Select the row and then click “Associate Address.” Choose the instance we just set up, and you’re done! One caveat to take note of: shutting down the server will disassociate the address, so when you boot it back up, you will need to re-associate it. Rebooting the server will not disassociate the address.

Setup elastic IP step 1

Setup elastic IP step 2

 Setup elastic IP step 3
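If you prefer the command line, the EC2 API tools have equivalents for these steps too; a sketch (the instance ID and resulting address below are placeholders):

# allocate a new Elastic IP (prints the address you were given),
# then bind that address to your instance
ec2-allocate-address
ec2-associate-address 203.0.113.10 -i i-12345678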

 

Connect to Your Server

Now that you’ve set up your new EC2 instance and configured your Elastic IP to route to it, you can make a connection to your new server.

NOTE: Typically you would set up a DNS host record resolving to your Elastic IP to simplify communication with your server; naturally, that’s what you would do when setting it up as a web server. In this case, since we’re just experimenting, you can set up an obscure name like test.yourdomain.com just so you don’t have to keep typing the IP address on the command line.

SSH in the Linux world is like the Swiss army knife for admins. You can use SSH to connect to an interactive terminal, transfer files, or even tunnel traffic through its connection. It’s also encrypted using public/private key pairs (typically RSA). When you created the key pair on Amazon, the public key stayed on Amazon’s side and you downloaded the private key. SSH uses that private key to authenticate your connection against the public key pre-installed for a user account on the server.
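To illustrate that versatility, here are the three most common uses side by side (the hostname and key file are placeholders):

# interactive terminal
ssh -i mykey.pem ubuntu@aws.yourdns.com
# file transfer
scp -i mykey.pem somefile.txt ubuntu@aws.yourdns.com:
# tunnel: forward local port 8080 to port 80 on the server
ssh -i mykey.pem -L 8080:localhost:80 ubuntu@aws.yourdns.com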

Have your key file handy. Earlier I suggested you place it in your home .ssh directory, ~/.ssh (on Windows 7, C:\Users\<username>\.ssh\). Right-click on any directory in Windows Explorer and open a Git Bash, then go to your .ssh directory by entering “cd ~/.ssh”. This places you in the same directory where we stored the AWS key file, making it very easy to refer to during this walkthrough. In the sample below, we called this file “mykey.pem”; call it whatever you want.
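One bit of housekeeping first: on Linux/Mac (and sometimes under msys), ssh will refuse a private key whose file permissions are too open, so lock it down:

# restrict the key so only you can read it; ssh may reject it otherwise
chmod 400 ~/.ssh/mykey.pem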

Enter:

ssh aws.yourdns.com -l ubuntu -i mykey.pem

 

NOTE: Don’t forget tab completion here. A shortcut is to enter the command below. (Of course, this assumes you’re in the same directory as the mykey.pem file.)

ssh aws.yourdns.com -l ubuntu -i my{TAB}

 

You should see a busy welcome screen with some basic info about your server. It will also let you know if you’re running the most recent AMI. Having the most recent AMI is not very important; with the built-in APT package manager, your server can easily stay current even when running an older AMI.

Linux domU- 2.6.32-318-ec2 #38-Ubuntu SMP Thu Sep 1 17:54:33 UTC 2011 i686 GNU/Linux  
Ubuntu 10.04.3 LTS

Welcome to Ubuntu!  
 * Documentation:  https://help.ubuntu.com/

  System information as of Tue Nov  8 19:48:48 UTC 2011

  System load:  0.0               Processes:           68
  Usage of /:   37.6% of 7.87GB   Users logged in:     0
  Memory usage: 41%               IP address for eth0: 10.208.195.158
  Swap usage:   0%

  Graph this data and manage this system at https://landscape.canonical.com/
---------------------------------------------------------------------
At the moment, only the core of the system is installed. To tune the  
system to your needs, you can choose to install one or more  
predefined collections of software by running the following  
command:

   sudo tasksel --section server
---------------------------------------------------------------------

8 packages can be updated.  
7 updates are security updates.

A newer build of the Ubuntu lucid server image is available.  
It is named 'release' and has build serial '20110930'.  
Last login: Mon Nov  7 21:31:53 2011 from 75.81.113.113

 

As you can see, I have a few packages available for update. So the first thing I like to do is immediately update the server. Do that with two commands: apt-get update and apt-get upgrade. Update re-syncs the local package cache against the repositories. Upgrade actually downloads and installs the updated packages onto your system. Dist-upgrade, another form of upgrade, will also install or remove packages as needed to handle changed dependencies; this is typically something you’ll use when you start with a fresh server. After that, the normal “upgrade” should work fine. You can combine these commands into one by chaining them with “&&”.

sudo apt-get update && sudo apt-get dist-upgrade -y

 

This will update the repositories and, if successful, continue to the upgrade command and install the updates. The “-y” option simply instructs the upgrade to automatically answer yes when asked to install package updates. We run the command with sudo to temporarily gain root privileges; we’re “su”-“do”-ing the command.

 

Install Mono & Apache

Now that we’ve updated the system, we can install all the necessary packages for the web server. I’m choosing Badgerports for the Mono install. Optionally, you can compile and install your own (sometimes more recent) version of Mono, but Badgerports is very convenient and easy to use. At the same time, we’ll install Apache. Enter these commands sequentially (one per line):

wget http://badgerports.org/directhex.ppa.asc  
sudo apt-key add directhex.ppa.asc  
sudo apt-get install python-software-properties  
sudo add-apt-repository 'deb http://ppa.launchpad.net/directhex/ppa/ubuntu lucid main'  
sudo apt-get update  
sudo apt-get install mono-apache-server4 mono-devel libapache2-mod-mono 

 

Setup Web Folder

Now configure the directory where we’ll place the web files. Enter the following commands to create the directory and set its ownership and permissions.

cd /srv  
sudo mkdir www  
cd www  
sudo mkdir default  
sudo chown www-data:www-data default  
sudo chmod 755 default

 

Configure Apache & Virtual Host

Finally, set up the Apache virtual host that will run our website. We’ll start by creating a virtual host configuration file that enables the Mono 4.0 server for the web directory we just set up. We’ll then wire it into the Debian/Ubuntu layout so we can easily enable and disable it. For this sample, I used the mono-project’s mod_mono configuration tool as a starting point, switched its server command to mod-mono-server4 (for the 4.0 runtime), and changed its DocumentRoot to the directory we set up.

 

<VirtualHost *:80>  
  ServerName my-mono-server.somewhere.com
  ServerAdmin web-admin@my-mono-server.somewhere.com
  DocumentRoot /srv/www/default
  MonoServerPath my-mono-server.somewhere.com "/usr/bin/mod-mono-server4"
  MonoDebug my-mono-server.somewhere.com true
  MonoSetEnv my-mono-server.somewhere.com MONO_IOMAP=all
  MonoApplications my-mono-server.somewhere.com "/:/srv/www/default"

  <Location "/">
    Allow from all
    Order allow,deny
    MonoSetServerAlias my-mono-server.somewhere.com
    SetHandler mono
    SetOutputFilter DEFLATE
    SetEnvIfNoCase Request_URI "\.(?:gif|jpe?g|png)$" no-gzip dont-vary
  </Location>
  <IfModule mod_deflate.c>
    AddOutputFilterByType DEFLATE text/html text/plain text/xml text/javascript
  </IfModule>
</VirtualHost>

 

We’re going to use the local clipboard to copy and paste the contents into VIM through the msys window. The default MSysGit client uses a Windows “cmd” shell, so you can right-click and choose “Paste” (or enable Quick Edit mode) to paste into the window. While working in VIM, if you have any problems with keystrokes, just hit {Esc} a couple times; you can then re-enter insert mode by pressing {i} once. Enter the following to open a new file in VIM.

cd /etc/apache2/sites-available  
sudo vi mono-default

#copy the contents above and paste them into the new file
{i}{Then paste - right click the window and select paste}
{Esc}:wq{Enter}

Now we can create a symbolic link to this file from the enabled directory.

cd /etc/apache2/sites-enabled  
sudo rm 000-default  
sudo ln -s /etc/apache2/sites-available/mono-default 000-mono
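Instead of removing and linking by hand, Debian/Ubuntu also ships helper scripts that manage these symlinks for you. The equivalent looks like this (note the generated link will be named mono-default rather than 000-mono):

sudo a2dissite default      # removes the 000-default link
sudo a2ensite mono-default  # links sites-available/mono-default into sites-enabled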

 

Install Trusted Certificate Authorities

If we want to use the AWS tools in our application, we first need to install the trusted root certificates. We’ll use the mozroots tool to do that, running it via “sudo -u www-data” so the certificates are imported for the www-data user.

Enter:

sudo -u www-data mozroots --import --sync

 

As a simple test to ensure the basic configuration is working, we can move the normal default web page into our new directory and then make a request against it.

sudo mv /var/www/index.html /srv/www/default  
sudo vi /srv/www/default/index.html 

# arrow down a couple lines to content
# press {i} then enter something. just to make it a little different
{Esc}:wq{enter}

# restart the web server
sudo service apache2 restart

If the web server restart reported “OK”, make a request against your server (from your local machine) to test the web server configuration. Open a browser and browse to http://<your-host-name>/index.html. You should see your slightly modified default page. Now we can continue on to building out the default MVC3 web application and test that.
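Before moving on, you can also sanity-check the page from the server itself without involving a browser (wget ships with the Ubuntu image):

# request the page locally and show the response headers; expect HTTP/1.1 200 OK
wget -S -O /dev/null http://localhost/index.html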

If the web server did not restart okay, read the messages it reported. If you’re not seeing anything useful there, you can always refer to the Apache log located at /var/log/apache2/error.log. If anything goes wrong throughout this entire process, always check the end of that file.
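For example, to watch the log live while you make a request:

# follow the Apache error log in real time; press {Ctrl-C} to stop
tail -f /var/log/apache2/error.log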

 

Deploy your first site

Create and Publish a New MVC 3 Project

Finally… We’re ready to deploy a real .NET MVC3 application on Linux. We’re going to do a typical BIN deployment of MVC 3. I’ve detailed this in another blog post, covering some of the potential issues you could run into while BIN-deploying an MVC 3 project to Mono; feel free to refer to that post if you have any problems. Let’s get started here by creating a new project.

Open Visual Studio and create a new Project. Choose .NET Framework 4, then ASP.NET MVC 3 Web Application. Choose the Internet Application template, select the Razor view engine, and optionally enable the HTML5 semantic markup. We’re just going to roll this application as-is; you do not need to create the unit test project.

Under the Solution Explorer, expand the References node. We need to enable Copy Local on a few necessary assemblies for this to work in Mono. Go to properties of the following assemblies and set their Copy Local option to True: System.Web.Helpers, System.Web.Mvc, System.Web.Routing, and System.Web.WebPages.

Now publish the project. Select the Build menu, then Publish AWSMonoSample.Web. Choose Publish Method: File System and select a local directory for copying the published files. I also typically select “Delete all existing files prior to publish” to ensure a clean publish directory.  Then click Publish. (In my sample, I’m publishing to C:\temp\AWSMonoSample.Web).

To deploy an MVC 3 site, we’ll also need to collect a few more MVC 3 and Razor dependencies. Browse to the Program Files\Microsoft ASP.NET\ASP.NET Web Pages\v1.0\Assemblies directory and grab System.Web.WebPages.Deployment.dll, System.Web.WebPages.Razor.dll, and System.Web.Razor.dll. DO NOT grab any other assemblies. Place all of these in your published folder’s bin directory. You should only have to do this once, but it’s a standing caveat of deploying MVC 3 to Mono on a new server.
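If you’d rather script that copy from the Git Bash shell we installed earlier, something like this works (the paths assume a default install location and the sample publish directory; adjust to match yours):

# copy the three Razor/WebPages assemblies into the publish bin folder
SRC="/c/Program Files/Microsoft ASP.NET/ASP.NET Web Pages/v1.0/Assemblies"
cp "$SRC/System.Web.WebPages.Deployment.dll" \
   "$SRC/System.Web.WebPages.Razor.dll" \
   "$SRC/System.Web.Razor.dll" /c/temp/AWSMonoSample.Web/bin/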

 

Deploy Published Files to Your Server

Now we can upload the files. Since AWS uses public key authentication by default, we’re going to use the manual command-line method. Once you understand public key authentication a little better, tools like FileZilla or WinSCP will make this a little easier (if you like the GUI thing).

So to start, open My Computer / Windows Explorer and browse to your published directory (C:\temp\AWSMonoSample.Web for this example). Then right-click on that directory and select Git Bash Here from the context menu. You should now be looking at a local bash command line with your working directory set to the deployment folder we just set up.

For this deployment, we’re going to first package the files into a gzipped tarball, then upload it to our server with “scp”. From there, we’ll SSH back into the server and extract the files into the web directory we set up earlier.

So enter:

$ tar -zcvf aws.tar.gz *  
$ scp -i ~/.ssh/mykey.pem aws.tar.gz ubuntu@your-aws-hostname:

 

NOTE: Replace the hostname with the one you set up for your AWS EC2 instance. Also replace the key file path with the one you set up locally for AWS.

You should see a successful transfer. Now you can connect to your server and extract the files in place. Open a new SSH connection (or use an existing one if you still have it open). Start from your home directory with the command “cd ~/”; that’s where scp dropped the tarball. Note that since /srv/www/default is owned by www-data, the file operations below need sudo.

ssh ubuntu@your-aws-hostname -i ~/.ssh/mykey.pem  
sudo mv aws.tar.gz /srv/www/default  
cd /srv/www/default  
sudo rm index.html   #remove the test page from earlier  
sudo tar -zxvf aws.tar.gz  
sudo rm aws.tar.gz   #optional cleanup

sudo service apache2 restart      #for a fresh start

 

That’s it! Now make a request against your server, and you should see the new MVC 3 sample website displayed! Keep in mind membership is not wired up, and the registration form may blow up on you (as it also would on Windows); but otherwise, the basic MVC stuff works, including model validation.

 

Useful Links