Converting ASP.NET WebForms to ASP.NET MVC 4.0

Monday, April 22, 2013 / Posted by Luke Puplett / comments (2)

This is a blog-in-progress while I try to convert an ASP.NET WebForms application to MVC 4. It may completely fail or I may give up, but I thought it might help to share my experiences.

What I am Migrating

It’s a one-project ASP.NET WebForms 3.5 site. It’s pretty simple and uses the old Ext JavaScript framework, which became Sencha UI, I think. There’s a fair few pages but not a lot of HTML in each, since it’s built with XSLT (vomit) from XML coming out of an XML database. Much business logic is in the data layer (vomit II).

Strategy

My bright idea is not to convert the project in place. I don’t think that’s the easiest route: I just don’t know what’s needed for an MVC app, and I want the IDE to be in MVC mode, with the context-menu support for views and stuff, which probably won’t happen if I just add some DLL references and set up some routing.

So, I will make a new, empty MVC 4 app and copy in the files from the old world. I know MVC is happy to serve up ASPX forms pages and controls, and that’s all a WebForms site is – just some ASPX pages and some handlers, maybe some URL rewriting.

Start

So far, I have:

  • Created an empty, new ASP.NET MVC 4.0 project.
  • Set the same project references and NuGet packages.
  • Set my solution and project configurations for staging/QA.
  • Copied over all the stuff from the old Web.config that I think is non-standard, i.e. anything added to support the old app. I already did this, so it’s hard to blog in detail, but it’s actually pretty simple.
  • Begun to copy over the basic, high-in-the-dependency-graph controls and pages.

Copying Stuff Across

I have copied /MasterPages and its children; /Classes, which is just some .cs files with helpers inside; /Controls, which are Web User Controls (ASCX files); as well as default.aspx (all come with their code-behind and designer files).

Problem 1 – Solved

In copying the files, by drag and drop, from the WebForms project in the same solution, the IDs of the controls on the ‘pages’ (in the ASPX or ASCX files) are not being ‘seen’ in the code-behind. By that I mean there are red squigglies in the C# wherever they are referenced; it’s as though the controls on the pages are not being compiled.

I reconstructed a control manually, by adding a new one with a different name and copying over the important mark-up and code. This was fine, so MVC is cool with the controls themselves; it just doesn’t like them being copied in file by file.

So I figured that it must be related to the designer file. The copied file doesn’t sit at the same level in Solution Explorer as the manually created good one, so there’s something odd going on. Opening the designer.cs file is fine, but the code doesn’t respond to mouse-overs – it’s lifeless, like a text file.

Solution: The trick is to delete the designer file and then right-click its parent ASPX (or ASCX) file and hit Convert to Web Application, which forces regeneration of the designer.cs.

You can copy a load in and then convert at the folder or project level, too, donchaknow.

Problem 2 – Solved

The default route, and getting default.aspx to be the page shown at the domain root. This one is easy, although I’m not sure it’s the proper way. Simply add this route:

routes.MapPageRoute("HomePage", "", "~/default.aspx");
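For context, here’s roughly where that line sits in a fresh MVC 4 project’s App_Start/RouteConfig.cs. This is only a sketch assuming the default template; the point is that the WebForms route is registered before the default MVC route so it wins the match for the root URL:

using System.Web.Mvc;
using System.Web.Routing;

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Serve the old WebForms home page at the domain root.
        routes.MapPageRoute("HomePage", "", "~/default.aspx");

        // The standard MVC default route, as generated by the template.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = UrlParameter.Optional });
    }
}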

Problem 3 – Solved

Settings in httpHandlers not working, i.e. requests still going via the routing system. This site has a load of magic set up in the web.config to make friendly URLs happen. Of course, this needs to be reconsidered in an MVC world, but we’re talking about things like blah.xml invoking a special handler; it’s all custom stuff for this old site.

The solution was in two steps:

- Add the following line so that these requests are not routed:

routes.IgnoreRoute("{resource}.xml");

- Also update the type entries in the httpHandlers section of web.config so they point at the new assembly, from:

<add verb="*" path="*.xml" type="Company.XmlHandler, SiteDllFile" />

- to:

<add verb="*" path="*.xml" type="Company.XmlHandler, NewMvcSiteDllFile" />
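One caveat, depending on how the site is hosted: under IIS 7’s integrated pipeline mode the system.web/httpHandlers section is ignored, and the handler has to be registered under system.webServer/handlers instead, something like this:

<system.webServer>
  <handlers>
    <add name="XmlHandler" verb="*" path="*.xml" type="Company.XmlHandler, NewMvcSiteDllFile" />
  </handlers>
</system.webServer>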

Problem 4 – Solved

Request validation seems to have been tightened up in ASP.NET 4.0, because I was getting an exception when reading Form values containing XML fragments. This was remedied with this config setting, which reverts validation to the 2.0 behaviour:

<httpRuntime requestValidationMode="2.0"/>
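That attribute sits on the httpRuntime element under system.web. Worth knowing: once the mode is back at 2.0, you can relax validation per page with the ValidateRequest directive instead of site-wide. A sketch:

<system.web>
  <httpRuntime requestValidationMode="2.0" />
</system.web>

<%@ Page Language="C#" ValidateRequest="false" %>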

Problem 5 – At this stage, there has been no problem 5

With everything copied over, and some shared components refactored out into a shared library, everything is working.


Helpful docs when hardening an IIS web server

Sunday, November 21, 2010 / Posted by Luke Puplett / comments (0)

While clearing up some crap on my desktop I came across a note listing some documents I’d used when preparing my web server for co-location and exposure to the harshness of the public internet.

How To: Harden the TCP/IP Stack
http://msdn.microsoft.com/en-us/library/ff648853.aspx

TCP Receive Window Size and Window Scaling
http://msdn.microsoft.com/en-us/library/ms819736.aspx

How To: Protect Forms Authentication in ASP.NET 2.0
http://msdn.microsoft.com/en-us/library/ff648341.aspx

How To: Perform a Security Deployment Review for ASP.NET 2.0
http://msdn.microsoft.com/en-us/library/ff647403.aspx

How To: Use IPSec for Filtering Ports and Authentication
http://msdn.microsoft.com/en-us/library/ff648481.aspx

How To: Use IISLockdown.exe
http://msdn.microsoft.com/en-us/library/ff650415.aspx


Recent Changes to vuPlan.tv Client

Thursday, October 07, 2010 / Posted by Luke Puplett / comments (0)

Since writing vuPlan.tv I’d been using a placeholder company name, ‘S26’, which I have now refactored to reflect my final company name, Evoq Limited. The new codebase is built with new filenames and namespaces.

Also, the Windows 7 implementation of the Media Center API, at the COM interop level, has a memory leak. The new client works around the problem by creating an API shim inside a new AppDomain which gets recycled at intervals.
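The shape of that workaround is roughly the following sketch. The class and member names here are made up, but the mechanics are as described: a MarshalByRefObject proxy created inside a second AppDomain, which is unloaded and recreated at intervals so the leaked memory goes with it.

using System;

// Hypothetical names: the real shim wraps the Media Center interop calls.
public class MediaCenterShim : MarshalByRefObject
{
    public string[] GetRecordingTitles()
    {
        // The leaky COM interop calls happen in here, confined to the shim's AppDomain.
        return new string[0];
    }
}

public class ShimHost
{
    private AppDomain _domain;

    public MediaCenterShim Shim { get; private set; }

    // Called at startup and again at intervals; unloading the old
    // AppDomain releases whatever the interop layer leaked.
    public void Recycle()
    {
        if (_domain != null)
            AppDomain.Unload(_domain);

        _domain = AppDomain.CreateDomain("MediaCenterShim");
        Shim = (MediaCenterShim)_domain.CreateInstanceAndUnwrap(
            typeof(MediaCenterShim).Assembly.FullName,
            typeof(MediaCenterShim).FullName);
    }
}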

Finally, after seeing Mike Taulty’s session at UK Tech Days, and his showing off of the new Metro-esque Windows 7 WPF applications Zune and MetroTwit, I reverted the vuPlan.tv client back into its Metro pyjamas. Previously I hadn’t done enough work on the shadow, which is why it never really rocked my world; this version is much better, I think.

[Image: the vuPlan.tv client’s funkier Metro look]

Testing is ongoing as I work on the web application/site. And that reminds me: I must update the images on the site.

Luke


ASP.NET MVC Robust HyperLinks

Sunday, September 05, 2010 / Posted by Luke Puplett / comments (0)

Motivated by ActionLink failing to produce proper MVC-style /controller/action/id links (and although RouteLink does work, I think route names, and possibly all route logic, should be kept out of the view), I decided to bake my own link maker: build the links in the controller and put them in the model. Here’s how I did that.

I have a BaseSiteNameController in which helper methods go. To this, I added the following method:

public static string BuildLink(
    RequestContext requestContext,
    string routeName,
    string action,
    string controller,
    object routeValues)
{
    UrlHelper u = new UrlHelper(requestContext);

    // Put controller and action into the route values rather than using
    // GenerateUrl's own controller/action parameters (see the note below).
    var rvd = new RouteValueDictionary(routeValues);
    rvd.Add("controller", controller);
    rvd.Add("action", action);

    var httpContext = System.Web.HttpContext.Current;

    // Looks up the named route and builds an absolute URL from it.
    return UrlHelper.GenerateUrl(
        routeName, null, null,
        httpContext.Request.Url.Scheme, 
        httpContext.Request.Url.DnsSafeHost, 
        null, rvd, u.RouteCollection, 
        requestContext, true);
}

Note that I specify controller and action in the RouteValueDictionary even though GenerateUrl has parameters for controller and action. When these are used, it kicks out old-skool URLs – I think these params were designed to be used when a route name is not supplied. The GenerateUrl method smells like it wasn’t designed for general direct use.

GenerateUrl is the thing that does the magic; it looks up the route and works out how the URL should be structured.

It’s a static method because I also want to be able to call it from my models. Some of my models have sub-models, and some of those have links. I figured that a View is for layout; links kind of straddle both sides, but the logic required to make them tips them into the non-view side, in my opinion.

My models aren’t always created and prepared by my controllers. Some of my models contain a small amount of logic to populate themselves, so I want them to be able to call this method* when they populate their links (e.g. FavouriteBooksModel.AddBookLink).

Note also that the method takes a RequestContext. I now have to have this context available in my models, so I pass it down using a CustomerWebProfile class that I already use to flow important data into my models, such as TimeZone data. Each model has a CustomerWebProfile property (inherited via a base model).

*Except that my models don’t call BuildLink directly, because I wrap these calls in helper methods.
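To make that plumbing concrete, here’s a minimal sketch of the base model; the property and type shapes are approximate:

// Sketch: every model inherits the profile, and with it the
// RequestContext that the BuildLink helpers need.
public abstract class ModelBase
{
    public CustomerWebProfile WebProfile { get; set; }
}

public class CustomerWebProfile
{
    public System.Web.Routing.RequestContext RequestContext { get; set; }
    public TimeZoneInfo TimeZone { get; set; }
}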

Under each action method, I add a static method that builds a link to that action. I do this so the caller doesn’t have to know the action name, and so I can refactor it easily.

public static string BuildLinkToCustomerAccount(
    System.Web.Routing.RequestContext requestContext, string customerId)
{
    return BuildLink(requestContext, "Default", "CustomerAccount", "Account",
        new { data = customerId });
}

Now my models use these helper methods to make their links (this code lives in the model):

public string CustomerAccountLink
{
    get
    {
        return Controllers.AccountController.BuildLinkToCustomerAccount(
            this.WebProfile.RequestContext, this.CustomerId);
    }
}

If I change my controllers or actions, I can change the helper method, and renames can be done without worrying about strings in aspx pages!

My views now make links in the normal way, like this:

<a href="<%: Model.CustomerAccountLink %>" ><%: Html.GetLocalString("Your Account") %></a>


And if I change the model, my pages won’t compile and I can sort them out before runtime.


ASP.NET MVC 2 RedirectToSignin

Thursday, August 19, 2010 / Posted by Luke Puplett / comments (1)

If, for whatever reason, you cannot use the [Authorize] attribute on an action method (or, as in my case, you have an unusual architecture), then this helpful method directs a visitor back to the sign-in or login screen and then re-runs the original action. It’s designed to work exactly as the AuthorizeAttribute does, but with the difference that you can do your own IsAuthorised logic within the method.

An application I’m working on has a WCF service which is where customers are logged-in. The MVC application simply packages up the web front end and ships all requests, CRUD ops, everything into service calls and then paints the results out via ASP.NET/HTML.

When a customer logs in to my MVC 2 app, it’s really just calling Login on the WCF Authentication Service – the MVC app doesn’t keep track of sessions and is truly stateless. The MVC app does, however, use forms authentication and so the cookie can say “yep, customer is signed-in” while the WCF service says, “Uh uh. This customer’s session expired.”

This means that my action methods with [Authorize] run, but then fail. I wrote this method to redirect the customer to the sign-in box, and then continue to execute the original action, using the returnUrl.

Code:


protected ActionResult RedirectToSignin(
    string returnAction, 
    string returnController, 
    object returnRouteValues, 
    RequestContext requestContext)
{
    UrlHelper u = new UrlHelper(requestContext);

    // Build the URL of the action to come back to after sign-in.
    string returnUrl = UrlHelper.GenerateUrl(
        null, returnAction, returnController, new RouteValueDictionary(returnRouteValues),
        u.RouteCollection, requestContext, true);

    string baseAddress = String.Format("{0}://{1}",
        HttpContext.Request.Url.Scheme, HttpContext.Request.Url.Authority);

    // Encode the return URL so it survives the query string.
    return Redirect(String.Format("{0}/Customer/Signin?returnUrl={1}",
        baseAddress, HttpUtility.UrlEncode(returnUrl)));
}

Now within my action method, if I get a null or a fault from my service, I return RedirectToSignin(xyz) instead of returning an error. After sign-in, the action is called again and all is good in the hood.
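For illustration, an action using it might look like this; the service call and names are hypothetical:

public ActionResult Account(string data)
{
    // A null here means the WCF session has expired, even though the
    // forms authentication cookie says we're signed in.
    var account = _service.GetCustomerAccount(data);

    if (account == null)
    {
        return RedirectToSignin(
            "Account", "Customer", new { data },
            this.ControllerContext.RequestContext);
    }

    return View(account);
}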


The Shame of CSS (or HTML)

Tuesday, August 17, 2010 / Posted by Luke Puplett / comments (0)

Is the web built on the heroic patience of thousands of creative geniuses, or by a bunch of morons? This is what I'm left wondering after a second go at HTML and CSS.

As a programmer working in programming languages, I'm used to being a dictator of determinism.

That is to say that, what I instruct the computer to do, happens exactly as I dictate. I run the whole show. This affords me as much power as it does responsibility. If I get those instructions wrong, my programme will not compile. If my code runs, it is computationally perfect even if there are logical bugs. My world is mathematically beautiful.

Due to a problem with my web designer deciding to ignore me, I thought I could knock up a very simple two-column, header, content, footer type web page in no time. I was wrong, and the memories of why I hired a web dude in the first place came flooding back.

For years now I've been reading the UK’s most popular web designer/developer rag .NET Magazine (nothing to do with that .NET, the irony being that the editor almost completely eschews .NET), and not once has the magazine cared to tell me that HTML is utter garbage.

Born out of SGML, HTML differs from XML in that its tags don't need to be closed. From its inception, HTML immediately repudiated its first opportunity to work properly.

Although, to be honest, I'm not entirely sure whether it's not CSS that's to blame, since I don't know where HTML ends and CSS begins.

If CSS does the layout then CSS is to blame. I would take back my previous comment about HTML but like a couple of naughty children, they're both at fault even if the other one did it.

In retrospect, I see now that there were some clunking great CSS warning clues:

  • After 15 years of mainstream web content creation, no one has made a decent WYSIWYG designer.
  • CSS is often referred to as being "hand crafted" (in the way that chiselling wood is semi-random).
  • "Hacks" are the everyday vernacular of a web designer.

The simplest tasks are nigh on impossible. Like centering. Particularly centering some content in relation to some other thing to the side of it.

Moving something without nudging something else somewhere you didn't want it, or bringing some other thing along for the ride.

Grids. You used to be able to use Tables which worked really nicely but Health & Safety came along and told everyone that they must use a combination of DIVs and strong painkillers.

And fonts are out.

Please stare in amazement at this very simple attempt to put a logo in the corner of a page with a band of grey going along the top.

[Image: the page as shown in the Visual Studio designer]

Looks alright, doesn't it? I mean, the logo doesn’t have its alpha background, but that’ll be alright in the real browser.

Here’s how to make this amazing page. You'd think to just put a logo image in the corner and make it a link. You can’t just make something a link, of course, so instead the HTML and CSS instructions say: stick a hyperlink in the corner, set its background picture so it looks like a logo but have no link text, then fuck around a bit to make the link the same size as the background image, and then add some margin. Also, use someone else's 960 Grid System thing to shortcut the almost impossible process of aligning stuff.

As you go about "crafting" the page, the designer will show you what you want to see because it doesn't like it when you're mad, but really all your hard work looks like this:

[Image: the same page rendered in IE8, with overlapping images]

Quite how Internet Explorer 8.0 – from the same people that wrote the designer, and Microsoft's best attempt at a working web browser – manages to screw this one up so spectacularly is a mystery. Overlapping images? WTF?!

Okay, Firefox is the web developer's favourite PC browser, let's see what it looks like properly rendered.

[Image: the same page rendered in Firefox]

Yeah, erm. Almost. Firefox very nearly nailed this complex grey box and logo. Good attempt, gold star for effort. I have no idea how adding margin to my logo-cum-hyperlink managed to add a margin to the DIV that is the hyperlink's parent's parent, but it's the equivalent of me doing a vasectomy on my granddad before my dad was born.

Of course, stuff in the mark-up is never born. Not unless you're using a different language: XAML. I am used to XAML. I like XAML. Extensible Application Markup Language was written by the people that gave us the world's worst browser; XAML, however, works perfectly.

In XAML, each tag is actually an instance of an object in memory. Each thing is created (born) when the XAML is processed, and things inside things really do become the children of their parents. The system is tied to the underlying programming language, which is compiled and so must be perfect.

I can create XAML elements in code, and I can create code elements in XAML. It’s a mark-up language for creating object graphs.

The XAML designer is perfect, stuff can be moved on the page without it writing the movements to the wrong place, and it shows reality.

By comparison, designing in HTML and CSS is like writing upside down with your wrong hand while blindfolded with only 4 lying bastards to assist you.

HTML 5 and CSS 3 aren't going to help much. They offer only a few extra commands to allow such extravagances as round corners and Flashless video.

To truly take the web into the future, the whole system needs to be bulldozed. As HTML 5 and CSS 3 have taken years and years to get this far (nowhere), a new great system will never be the product of a committee. It would have to be a disruptive innovation from a team of just a few.

N.B. If there are unusual breaks throughout this document, it's because Windows Live Writer and Blogger can't decide how to format the HTML. Or maybe it's the CSS.


Email to Phil Haack on MVC Routing

Wednesday, August 11, 2010 / Posted by Luke Puplett / comments (1)

Hi Phil

Congratulations on MVC, I can say that it's the first major MS framework I've enjoyed in a while, due to its simplicity.

Can I suggest a non-matching style routing system for MVC 4?

The route-matching logic is prone to problems, see link, and although there may be an answer, I don't care; as a 'user' of your technology, I want to go home happy and make progress on my project. I just want it to work.

My idea is to have an attribute on each and every action method that sets the route, optional default values for the params and IsDefaultAction for the controller.

A full list of all controllers and routes can then be made and the need to 'match' is eliminated.

The only problem might be that attributes are fussy about using constants, so maybe this would suffice if the attrib won't take an anonymous type:

[Route(Format = "{controller}/{id}/{action}")]
[DefaultAction]
[DefaultParamValue(Param = "id", Value = "")]
public ActionResult ... (string id) { ... }

And for controller-less and action-less URLs:

[Route(Format = "{year}/{month}/{day}")]
public ActionResult ... (int year, int month) { ... }

Which would throw because the day parameter is missing, making route misconfiguration easier to discover. The Controller and Action method to call is inferred from what the attribute decorates. If you wanted to allow it, the controller would no longer need to be named XyzController.

Furthermore, if someone defines two controller-less and action-less routes, both with 3 params, then this can be caught and thrown when the route table is built instead of only being discoverable when a request comes in.

For example, if I also add to a different method:

[Route(Format = "{country}/{case}/{agent}")]

Then it should detect that this conflicts with the one above. There's no controller name or action name in the URL to assist routing and both take 3 params.

I'm sure I've missed out some key things that the current system permits, but as I said, I don't want it to be smart and enigmatic. I want it to work, or clearly direct me to the problem when I mess up.

Thanks for listening.

Luke


Adding a UserAgent to WCF Clients

Thursday, July 22, 2010 / Posted by Luke Puplett / comments (0)

A quick post to show how to add a UserAgent to a WCF call so that it can be inspected on the server side, perhaps to see which versions of clients are calling your service.

And completely free of charge, I'm including some extraneous code I use to show how service method calls can be made without having to jump through hoops every time.

The code

public T CallServiceMethod<T>(Func<T> methodCall, bool canExpectNull)
{
    T response;

    using (OperationContextScope scope = new OperationContextScope(this.ServiceClientChannel))
    {
        HttpRequestMessageProperty p = new HttpRequestMessageProperty();

        // Reapply the Forms Authentication cookie stored by this class.
        p.Headers.Add(System.Net.HttpRequestHeader.Cookie, this.AuthenticationCookie);

        // Identify this client version to the server.
        p.Headers.Add(System.Net.HttpRequestHeader.UserAgent, typeof(ServiceHelper).Assembly.FullName);

        OperationContext.Current.OutgoingMessageProperties.Add(HttpRequestMessageProperty.Name, p);

        this.IncrementCallsInProgress();
        try
        {
            response = methodCall.Invoke();
        }
        catch (Exception)
        {
            // In the real code there's some logging here.
            throw;
        }
        finally
        {
            this.DecrementCallsInProgress();
        }
    }

    return response;
}

The blurb

I'm sorry about the broken lines - I so nearly picked a full width Blogger template, too. The method above essentially wraps a delegate invocation in some calls into WCF's OperationContext which adds the headers. It's interesting to look at the OperationContext in the debugger, much as you probably did with the HttpContext when first looking at an ASP.NET app - it's sort of the equivalent, but in reverse.

The tail of the method just catches errors: the finally block decrements the calls-in-progress counter and, in the real code, there's some logging too.

On the server, I use the HttpContext.Current.Request.UserAgent string to log which client versions my customers are running. Useful.

Notice that I'm also adding a cookie, which I store in the class that this method is part of. I'm using the built-in AuthenticationService, which uses Forms Authentication and thus cookies. This is not required in Silverlight, as the IE stack stores and reapplies any cookies received automatically.

To use this method, I instantiate my service client proxy and then call serviceHelper.CallServiceMethod(() => proxy.SomeMethod(xyz), false);

The proxy call is thus invoked within the context changes above.
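Spelled out a little more fully – the proxy type and method are hypothetical – a call site looks like this:

// CustomerServiceClient is a generated WCF client proxy; serviceHelper is an
// instance of the class holding the channel, the cookie and the call counter.
var proxy = new CustomerServiceClient();

Customer customer = serviceHelper.CallServiceMethod(
    () => proxy.GetCustomer(customerId),
    false); // canExpectNull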


Setting-Up WCF Over SSL on IIS 7.x

Monday, July 19, 2010 / Posted by Luke Puplett / comments (1)

This is a short post about the steps I had to take to switch an existing test WCF service over to a secure staging version that more closely mimics how it’ll be in the production environment. I hope to include some things they didn’t tell you in the instruction manual, mainly concerning the use of test certificates.

It’s stuff like this – configuring a secure web service – that makes me dislike WCF and IIS quite a lot. I liked Web Services, and Remoting, I even found raw sockets surprisingly easy but WCF is hard work. It’s the sort of thing that needs its own UI, management tools, and server, but to do this would arguably constrain its power.

  1. Add a self-signed certificate to the server by opening IIS Manager, highlighting your server name in the left pane, locating Server Certificates on the right, choosing Create Self-Signed Certificate and following the instructions.

  2. Add a new binding to your site for HTTPS and note that there’s no option for the host name.

  3. WCF cannot handle multiple bindings to the same scheme, as IIS and ASP.NET sites can, so if your WCF service is hosted downstream of your main site, such as from a virtual directory underneath your root domain and site, then add the following to your web.config which will filter the bindings so WCF sees just two:


    <system.serviceModel>
      <serviceHostingEnvironment>
        <baseAddressPrefixFilters>
          <add prefix="http://wwwdev.dom.co.uk" />
          <add prefix="https://wwwdev.dom.co.uk" />
        </baseAddressPrefixFilters>
      </serviceHostingEnvironment>
    </system.serviceModel>
  4. And now the bit that never seems to be explained and doesn’t merit a proper UI in IIS: binding the certificate to the host header. Run this at a command prompt (one line):


    appcmd set site /site.name:"Main Site" /+bindings.[protocol='https',bindingInformation='*:443:wwwdev.dom.co.uk']
  5. Do not remove the original HTTPS binding. When I did, I got this error:

    An error occurred while making the HTTP request to https://wwwdev.dom.co.uk/xml/AuthenticationService.svc. This could be due to the fact that the server certificate is not configured properly with HTTP.SYS in the HTTPS case. This could also be caused by a mismatch of the security binding between the client and the server.

  6. Now, if you’re using the built-in AuthenticationService, in your web.config or web.staging.config, make the following change/addition:


    <system.web.extensions>
      <scripting>
        <webServices>
          <authenticationService enabled="true" requireSSL="true" />
        </webServices>
      </scripting>
    </system.web.extensions>
  7. Still in this document, find the binding element for the service and set it to use Transport security – the bottom lump of mine looks like this:


        <binding name="ssl">
          <security mode="Transport" />
        </binding>
      </basicHttpBinding>
    </bindings>
  8. Close and save all that and then make the same change on the client. Personally, I don’t use config files for public ‘out there’ apps, so the change is made within a service client factory using the following code (there’s a fuller sketch of the factory after this list):


    if (withSsl)
        basicBinding.Security.Mode = BasicHttpSecurityMode.Transport;
  9. Add the following non-WCF-specific code at some point in your app; it essentially just accepts any certificate as valid, even the downright dodgy ones (so make sure not to let it leak into production).


    #if DEBUG || STAGING
    System.Net.ServicePointManager.ServerCertificateValidationCallback = (se, cert, chain, sslError) => { return true; };
    _log.Warn("A pre-release option has been set: server certificates are no longer being checked for validity by this client app.");
    #endif
  10. You do not need to add code that modifies the Authentication settings for the ServiceCertificate on the ClientCredentials object of a service client (ServiceBase<T>).

  11. Now test your service. You may want to use a tool like Fiddler to inspect the HTTP traffic.
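As promised in step 8, here’s a slightly fuller sketch of the client factory change; the proxy type and address are placeholders:

    // Sketch: flip a generated proxy over to SSL per environment.
    public static AuthenticationServiceClient CreateAuthenticationClient(bool withSsl)
    {
        var basicBinding = new System.ServiceModel.BasicHttpBinding();

        if (withSsl)
            basicBinding.Security.Mode = System.ServiceModel.BasicHttpSecurityMode.Transport;

        string scheme = withSsl ? "https" : "http";
        var address = new System.ServiceModel.EndpointAddress(
            scheme + "://wwwdev.dom.co.uk/xml/AuthenticationService.svc");

        return new AuthenticationServiceClient(basicBinding, address);
    }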


How to: Send Email from a Microsoft Server Application

Friday, June 18, 2010 / Posted by Luke Puplett / comments (0)

If you have a server application, web site, or even a job or task running on a Windows server, you may need to send out emails. For me, it was an ASP.NET MVC 2 application; I needed to send an email to people who stuck their address in the interested-parties register on the coming-soon page on vuPlan.tv, to confirm it was a genuine request. But you could have a build server that you want to email a report from, or some other thing. This guide is more about the server-side setup than the SMTP client you send with, although I will give examples in .NET and DOS, via Blat.exe.

To make things easy, I’ll bullet the whole process.

  • First, ensure that you cannot use an existing Exchange Server or MTA in your organisation. It makes life so much easier when someone else deals with these things because, as the person coding or scripting, you should only care about getting the job done.

  • After you’ve failed to convince the jobsworth in IT security, install your own. In Windows Server 2008 (Web Edition, also) open Server Manager > Features node > Right-click > Add Feature > locate SMTP Server and install the requisite IIS 6 components, as well as Telnet Client.

  • If you are using a hosted or public internet-facing server, you should now ensure that your server has a PTR record for its IP address. The people that gave you your RIPE internet address can do this (usually your host or ISP). Do this now so they can get started.

  • To prevent spam, most MTAs will do a reverse DNS (rDNS) lookup on your server to ensure you have the same domain name as you claim to be sending email from. The style of address your server has can also trigger a spam detector, so avoid a reverse DNS address such as 100-99-98-97.dynamic.mydomain.com as it looks like the dynamic ranges given out to broadband home users (who are blacklisted on SMTP servers by default).

  • Check http://spamhaus.org and make sure that your IP address is not blacklisted before you begin.

  • Set the Simple Mail Transport Protocol service to start automatically.

  • You may want to add an smtp CNAME into your DNS server so you can move it in future without editing code or config files.

  • In IIS 7 Manager, locate your site and then the SMTP E-mail icon; set the email address and check the radio button to deliver to localhost on port 25, or the DNS alias you set up. Two things to note: setting delivery to a pick-up directory is useful on development boxes when you don’t want to flood the mail servers with real email (the .eml files can be opened in Outlook), and the other settings on this page may actually be ignored by clients/your app, but I’m setting them in any case.

  • Open IIS 6 Manager and you should just have the SMTP node in there.

  • Right-click the SMTP virtual server and go to Properties.

  • Tab 1: Tick Enable logging and set a valid path.

  • Tab 2: Under Authentication, set Anonymous access only for the time being. Under Connection, check All except the list below and under Relay, check All except the list below and tick Allow all computers at the bottom.

  • Tab 3: Check the Badmail directory. You can fill in the Send copy… box, but it may not work at first anyway. Leave the other settings.

  • Tab 4: Set all timeouts and delays to minimum. This is so the server stops trying and gives you an error to look at quickly. You should reset these after setting up/troubleshooting. Enter Outbound Security and make sure that it’s set to Anonymous.

  • You should review the other tabs but otherwise leave them for now and click OK.

  • Now rename the domain to just the domain part (i.e. strip the host name) and close IIS 6 Manager.

  • Open a DOS prompt and test that your SMTP server is listening: telnet localhost 25, or whatever your server name is. Note “Microsoft ESMTP MAIL Service, Version: 7.5”. 7.5, eh? What’s all this needing IIS 6 about, then?

  • Download Blat.exe and ‘install’ a default profile like so:

    blat -install -f fromaddress@mydomain.com -server localhost -port 25 -try 1 -profile myprofile
  • Now you can try sending a test message like so:

    blat - -to me@hotmail.com -f me@mydomain.com -p myprofile -body "This is a test." -subject "This is a test message."
  • Hotmail has a very clearly written Non-delivery Report (NDR) message, which is why I chose them; also, you’ll hear the ping from Live Messenger if it works.

  • Remember to check your Junk folder!

  • Open Event Viewer and ensure nothing was logged by the SMTP service. I use a catch all filter for “Everything in the last hour”.

  • Open the c:\inetpub\mailroot folder and keep an eye on what’s in each of the folders. Personally, I’ve found that Badmail doesn’t seem to gather anything and that the Drop folder has the most interesting information. EML files can be opened in Notepad and the Drop folder is like the Inbox; it will receive NDRs that contain error messages.

  • Also, check the SMTP service log file. A successful connection should be visible in the log as OutboundConnectionResponse lines. If these are not there, then it is having a problem connecting to the remote mail server.

  • Check that the remote MTA is up. Use http://mxtoolbox.com to find the MX record and thus the mail server for the recipient. Check it’s up.

  • Ensure that port 25 is open in your firewall.

  • Attempt to telnet to the remote server.

  • Give up and use http://Jangomail.com

When you get Blat.exe sending successfully, then you can use the SmtpClient class to very easily fire out your emails from your ASP.NET app.
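As a sketch of that .NET side (the addresses are the placeholders from above), pointing SmtpClient at the local server, or the smtp CNAME, is about all there is to it:

    using System.Net.Mail;

    // Minimal sketch: send a message through the local SMTP server set up above.
    var message = new MailMessage(
        "fromaddress@mydomain.com", "me@hotmail.com",
        "This is a test message.", "This is a test.");

    var client = new SmtpClient("localhost", 25);
    client.Send(message);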

Good luck!


Azure vs Hosting: Bang for Buck

Tuesday, May 18, 2010 / Posted by Luke Puplett / comments (1)

Following on from my last post about the cost of running a little Azure-hosted website, I have done some more calculations.

As it turned out, my previous favourite, Fido.net had run out of servers and wanted more money than I was prepared to part with for the next rung up the server ladder.

Previously, I had toyed with the idea of running my own server. I can do this with Fido, but they want me insured, and when I spoke to insurance people, they said it was a waste of time, as liability would fall on the OEM of the server that set fire to the building.

Redstation do co-lo and it’s cheap. You get a 100Mbit line and around a 4Tb traffic cap, a 1U space in the rack and 8 IP addresses (5 usable), 24/7 access, and free tea and coffee from the machine.

Couple them with a little server built here and I could be onto a sure-fire value winner.

A custom PCN-built 1U server with my own reliable SSD drive added will set me back $775. That's a quad-core box with 8Gb RAM.

I can also load it up with my own BizSpark licensed software, which is not an option when you rent a server. Plus I can add another server some day and split out the SQL duties – although I’m getting ahead of myself.

Let's see how it all stacks up. Sorry for the micro text.

[Image: bang-for-buck comparison spreadsheet]

Unless I’m mistaken, and the point of a blog is much to do with airing thoughts so that others can give theirs, Windows Azure is exceptionally bad value.

I’m now using the Compute time as “time in existence” of the VM. And when the VMs are so puny and the price is so large – 7p an hour for 1.6GHz and 1.75Gb RAM works out at roughly £50 a month just to keep one small instance switched on – it’s not good.

When I first looked at Microsoft’s Azure platform, I thought it spelled the end for traditional hosters. Evidently not. I thought it was a low-barrier-to-entry way for mobile app makers to get their apps into the cloud.

They have missed the opportunity to be truly disruptive in this market and charge a base rate plus the amount you use. Being MS, they own the OS and have the power to charge really accurately by utilisation, and auto-provision at times of duress. At the moment, the value proposition is in the quick provisioning of servers, which would benefit a company that gets massive influxes of traffic for short periods, like ticket sales.

Anyway, this is as much about Azure as it is about hosting options for a small website/service, and so it now looks like building a cheap server and paying for the rack space is the most cost-effective solution.

Until I sign a contract though, it could change. One thing that rented boxes provide is a little more peace of mind from driving down to the chilled room with a screwdriver at 2am. Although, from my experience, hardware doesn’t fault that easily and it’s almost always a dodgy spindle.

With SSD drives, I hope to eliminate that.


Compare the Meerkat: Windows Azure Cost Planning

Saturday, May 15, 2010 / Posted by Luke Puplett / comments (2)

Please read the comments. This was written before I confirmed that Azure compute time is uptime, not CPU time actually used.

I have another post written after establishing the above fact.

This weekend I had planned to get my data into the cloud, Microsoft's cloud to be precise, but was confronted with Microsoft's version of an online shopping service before I could provision my little slice of the cloud. Probably foolishly, I was expecting to just walk right in with my Live ID and MSDN sub, but it gave me the opportunity to compare the Windows Azure prices with the rest of the colo and hosting market.

As with any cost planning, a load of assumptions have to be made about capacity and requirements. In the ordinary world of hosting, this means basically wondering if you'll use all the RAM on the supplied box or not, but with the Windows Azure model so granularly broken down, the sums become slightly more fiddly.

Azure has prices for storage transactions as well as for things like AppFabric Access Control transactions and Service Bus connections. For the sake of my planning I have conveniently pretended that they're not there in a sort of ignorance is bliss line of thinking. Well, I actually don't think I'll use this stuff for my project, yet.

I'll delay no longer and get to the interesting part. For £129.22 I'll be getting the equivalent of a 1GHz processor running at full chat for a month, as well as 1Mbit of full-chat bandwidth, 0.1Mbit incoming and 50Gb of storage used. Oh, and not forgetting 10Gb of SQL Azure - 1Gb being really rather too measly for anyone's use of a full-blown RDBMS.

[Image: Azure small-scenario cost spreadsheet]

What's perhaps interesting is that MS don't seem to charge for extra RAM. To access more RAM you take a bigger instance with more processors, but if your workload is the same the clock cycles will cost the same, albeit spread over more cores.

For comparison's sake here's a similar costing scenario I did with a bunch of hosting companies in and around the UK. Stop squinting.

[Image: 2009 hosting cost comparison spreadsheet]

The company I had previously chosen was Fido.net, which charges about £100p/m for a dedicated dual-core box with a single 500Gb HDD and Windows and SQL Server 2008. They give you 2Mbit for about £30p/m, which equates to around 600Gb in/out data, and for 5% of your time you can be bursting to the full 100Mbit.

Fasthosts might work out cheaper and CloudHosts have a very good reputation, but little Fido.net was set up by an ex-colleague, and while I don't get mate's rates, I do feel that his small company won't have a saturated network or a lack of care for any woes I might run into.

And regarding that bandwidth, most companies give an unmetered 100Mbit connection and say that it's shared, but also that "no one has ever come near to rinsing it" - which makes me think that 1Mbit might probably suffice in my situation, and thus the amount assumed in my Azure scenario.

Half Conclusion

Fido.net's £100 + £30 is very much like Azure's £129, and so this leaves me in a quandary. I could go with Azure and its instant scalability and other features to plug into, etc., but that £130 in the Fido purse gives me much more than what I get with Azure, if I choose to use it. Remember that with Azure I'm pricing that exact usage, whereas with the others there's a lot more room left before I need to buy a bigger, or another, dedicated server.

If I were to compare the cost of a fully utilised dedicated server with Azure, the dedicated box would win hands down. And therein lies the rub: those damned usage assumptions.

Mostly though, I am put off by that SQL Azure price; £60p/m for 10Gb when normal hosters will dish out the whole server and SQL Server with the freedom to fill the whole disk with a 250Gb database if you so desire.

I hope this gives someone food for thought, even if my costings aren't terribly scientific.

P.S. This doesn't factor in the 'offers' that Microsoft has for new joiners, which are about 50% off for 6 months-ish.


Note to self: Web Page Performance Optimization - Notes from MIX10

Tuesday, March 23, 2010 / Posted by Luke Puplett / comments (0)

My personal notes from Jason Weber's session at MIX10. Essentially just a bulleted list of the 20 or so recommendations he made.

  • Compress HTTP but never images.
  • Provide cachable content with future expiry date.
  • Use conditional requests, GET if modified since.
  • Minify JavaScript; can reduce by 25%.
  • Don't scale images, resample them.
  • Use image sprites.
  • Avoid inline JavaScript.
  • Linking JavaScript in the head blocks HTML rendering; use defer="defer" if it must go there.
  • Avoid embedded styles.
  • Only send styles actually required by the page - server side plug-ins can automate this.
  • Link CSS at top in head.
  • Avoid using @import for hierarchical styles, place them in HTML head.
  • Minimize symbol resolution, cache in method body. Functions also; create function pointers in local method.
  • Use JSON native methods if available.
  • Remove duplicate script file declarations.
  • Minimize DOM interactions, cache references.
  • Built-in DOM methods are always faster.
  • Use selector APIs.
  • Complex element selectors are slow; favor class selectors and child instead of descendant.
  • Apply visual changes in batches.
