Note to Self: Raw dump of all my old Azure notes

Friday, February 27, 2015 / Posted by Luke Puplett / comments (0)

This is just a dump from some OneNote pages I took over the last few years. It's completely unstructured and here for my own use, and for anyone else who might find some value in it.

Azure Services

  • Big data: Hadoop
  • Database: SQL Azure
  • Storage: Tables, Blobs, Files, DocumentDB
  • Traffic: Traffic Manager, Networking
  • Caching: Redis, AppFabric
  • Messaging: Service Bus, Queues
  • Identity: AD, ?
  • Media: Media Services and streaming
  • Hosting: CDN, Websites, Cloud Service Worker Roles, WebJobs


WebJobs

Background processing in Azure Websites

Deployed with Azure Websites.

  • Starts with a Console Application.
  • Install NuGet: Microsoft.Azure.WebJobs
  • Depends on Azure Storage, so brings in dependencies.
  • Add connection strings for storage and the dashboard; use two storage accounts (a config sketch follows).
  • Need to set up the accounts in the Azure portal; empty is fine.
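
For reference, a rough sketch of the two App.config connection strings the WebJobs SDK looks for; the names are the SDK defaults, the values are elided placeholders.

<connectionStrings>
  <!-- Used for the storage the job binds to (queues, blobs). -->
  <add name="AzureWebJobsStorage" connectionString="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..." />
  <!-- Used by the WebJobs dashboard for its own logging. -->
  <add name="AzureWebJobsDashboard" connectionString="DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..." />
</connectionStrings>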

using System.IO;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // Reads the connection strings from config and starts listening for triggers.
        var jobHost = new JobHost();
        jobHost.RunAndBlock();
    }

    // Runs when someone puts a message on the "helloworld" queue;
    // the result is written out to the hello/world.txt blob.
    public static void HelloWorld(
        [QueueTrigger("helloworld")] string message,
        [Blob("hello/world.txt")] out string outMessage,
        TextWriter log)
    {
        string result = "The message is " + message;
        log.WriteLine(result);
        outMessage = result;
    }
}

  • You can run this locally.
  • Setup the queue.
  • New queue, "helloworld"
  • Added a new message via context menu on queue in Azure Management Studio.
  • Hit F5 and Job host starts and runs the block.
  • The SDK peeks, triggers, pulls the message and sends it into the HelloWorld method; no queue-reading code is needed!
  • That's it.

Running in Azure

  • "Always On" needs a paid website.
  • Console Application, right click it, Publish as WebJob…
  • Run modes: Continuous, Scheduled, On Demand.
  • Continuous is the mode for queue triggers.
  • Create a new website, or attach it to an existing one.
  • If you create a new site, only the WebJob is hosted on it.
  • Can also add a WebJob as a zip file via portal.
  • Can use other languages, like a PowerShell script.
  • With C# we can use diagnostics, logging etc.

In the olden days

  • We used a CloudService, a worker role. Still can.
  • A cloud service is a way to package up applications for Azure, in Roles.
  • Inherits from RoleEntryPoint; needs OnStart, Run, OnStop; lots of manual coding for reading config file settings, connecting to the cloud storage account and queue, creating them if they don't exist, also connecting to blob storage, containers, etc.
  • Running in a perpetual loop with your own sleep/delay logic; a sketch follows.
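
A minimal sketch of that old pattern, for contrast with the WebJobs code above; the setting name and queue name are illustrative.

using System;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

public class WorkerRole : RoleEntryPoint
{
    private CloudQueue _queue;

    public override bool OnStart()
    {
        // All the plumbing the WebJobs SDK now does for you.
        var account = CloudStorageAccount.Parse(
            RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
        _queue = account.CreateCloudQueueClient().GetQueueReference("helloworld");
        _queue.CreateIfNotExists();
        return base.OnStart();
    }

    public override void Run()
    {
        // The perpetual loop with hand-rolled sleep/delay logic.
        while (true)
        {
            CloudQueueMessage message = _queue.GetMessage();
            if (message != null)
            {
                // ... process the message ...
                _queue.DeleteMessage(message);
            }
            else
            {
                Thread.Sleep(TimeSpan.FromSeconds(10));
            }
        }
    }
}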

Azure Caching

  • VM with manually installed cache like Couchbase.
  • New! In the store, there's Add-ons, Memcachier: New > Store > Memcachier, select plan, failover etc.
  • Install NuGet package, Enyim Memcached.
  • Redis Cache for Azure: available as a service; has grouping and cached lists addressable via their own item index, pub/sub messaging, batch transactions.
  • The Redis service has tiers for throughput and failover, 250MB-53GB; master-slave basic option for replication + auto-failover; scale size up instantly. For dev, use the smallest cache, which has an IOPS limit.
  • Can use it from anywhere, is public.
  • Portal: New > Redis Cache
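
A quick sketch of using the Redis service from .NET with the StackExchange.Redis NuGet package; the cache name and key are placeholders.

using StackExchange.Redis;

class RedisExample
{
    static void Main()
    {
        // The cache is public, so this works from anywhere that has the access key.
        ConnectionMultiplexer connection = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,ssl=true,password=<access-key>");
        IDatabase cache = connection.GetDatabase();

        cache.StringSet("greeting", "hello");
        string value = cache.StringGet("greeting"); // "hello"
    }
}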

Azure Automation - Runbooks

  • Azure Automation is a service on Azure, a hosted run space, backed by PowerShell Workflow.
  • System Center Orchestrator is very similar.
  • Centralized Store: values, credentials, PS modules.
  • API for Management coming.
  • Reporting and history.
  • Automation accounts are tied to regions.
  • Might want to split accounts for security/access to production credentials.
  • Assets (stored items) are shared across the account.
  • Runbook: a set of scripts to execute, becomes a 'job'.
  • Schedules: daily, hourly, once.
  • Priced on job run time and number of runbooks, per account.
  • For a call to be made to an external HTTPS service, you need to download and install the certificate, since there is no access to a root cert store.
  • By default you get the PS module for Azure.
  • Assets: click import module and browse to a zip file containing the modules.
  • Can now pass a PSCredential object to authenticate with Azure AD using an "organizational account".
  • Can edit the runbook within the portal on the Author 'tab'.

workflow Show-SimpleExample
{
  # Pull the stored credential asset and use it to authenticate with Azure.
  $adminCred = Get-AutomationPSCredential -Name "NameOfCred"
  Add-AzureAccount -Credential $adminCred
  Select-AzureSubscription -SubscriptionName "MSDN Subscription"
  Get-AzureService | select servicename
}

  • Add-AzureAccount is what's used to authenticate; the account must be stored statefully in the PS runspace/process.

Game Services

  • Membership, Leaderboards, Achievements, Downloadable Content, Game Statistics, Game Presence, Cheating & Banning, Multi-Player Game Stats.
  • Tricky to choose the right services, especially as they're changing all the time and getting new capabilities.
  • Telemetry, Inquiries, Commands, Notifications
  • Uses Service Bus pub/sub topics to get telemetry into worker roles for processing.
  • Uses Relay Service to get notifications to other drivers (presumably hosting WCF in game) but says that it's not actually very scalable, use something else.
  • Storage, uses tables for lap times and telemetry, blobs for binary lap replay.
  • Uses ASP.NET MVC website(s) for lap time display, telemetry API for inquiries and website itself.
  • PartitionKey: partitions have an SLA; rows are ordered by RowKey within a partition.
  • Telemetry: sent per sector, once per 10 seconds; data sampled at 100ms and batched to send once per second (interpolation on receive smooths the jumps).


Halo Game Backend Services

  • Service Bus and a worker process to digest statistics, user stats.
  • Authentication, XBox sits on secure network tunneling over the internet.
  • Security gateways, Xbox Secure Protocol, UDP based; have to use the SG to talk to the public internet.
  • "Title infrastructure"
  • XBox has limited local storage slots and RAM, need to offload temp data, partial statistics.
  • They keep session state, and sessions are marked as complete, so crashed servers can resume by reloading session state.
  • Massive scale testing, Azure Service Bus team had to ask Halo team to stop!
  • Scale testing had to be seriously invested in, no standard tools: record, mutate and playback. Record the real traffic. Hard to fake-generate certain data types.
  • Like a server-side version of Fiddler.
  • Use scheduled Service Bus messages: dump millions of messages, but scheduled for later delivery.


Building Big: Lessons Learned from Customers

  • James Hamilton: lessons learned from building Windows Live.
  • Partitioning your application.
  • Optimising for density.
  • Caching

  • Millions of users, 200,000+ ops per second, 1000s of cores, 100s of databases.
  • Redundancy and Fault Recovery
  • Commodity hardware slice.
  • Single version software.
  • Multi-tenancy.
  • Support geo-distribution.
  • Automatic provisioning and installation.
  • Configuration and code as a unit.
  • Manage roles, not servers.
  • Deal with multi-system failures.
  • Recover at the service level.

  • Stateless is the goal.
  • Small code optimizations can have massive impact on your cloud bill.

  • Typical Workloads
  • Content Delivery: websites and services, session state, transient state, shopping cart.
  • Content Exploration: per-user content view, per-user stateful progress, doesn't touch other user data, fairly simple to scale.
  • Social Graph and Content: comments, likes, global reach between users, loosely consistent, async updates to n customers; I must see my comment immediately but it's okay for it to take a short time for others to see it.
  • Interactive Gaming: n user content view, game actions, session, global reach, state updates shared to n players.

  • Capacity, adding for demand, partitioning scheme.
  • Optimize, resource usage, efficiency
  • Shift, trade durability, queryability, consistency for throughput, latency.
  • Play to strengths of components available.

  • Azure compute, fairly easy to scale up and out
  • Azure storage, 100TB, 5,000 IOPS per partition, 3Gbps; normally you hit the IOPS limit first, so use more partitions or more accounts.
  • Azure SQL Database, 150GB, 305 threads, 400 concurrent reqs, hard to partition because the query semantic doesn't account for partitions/cost of operation.

  • Horizontal partitioning, shards, split by rows, needs balanced part key.
  • Vertical partitioning, split by columns, can be done across storage types easily on the cloud.
  • Hybrid, shard + dimension data on other storage mediums.
    • Select a partition value, like Last Name; must choose a field that won't change.
    • Convert it to a partition key, e.g. hash it; speed vs. collisions vs. distribution; mod by bucket count.
    • Map the key to a logical partition.
    • Map the logical partition to a physical partition.
    • End up with a connection string (a sketch follows after this list).

  • Range Based, ranges adjusted to even out the parts.
  • Logical Buckets, assign to logical bucket and assign to physical store, can have more than one logical per physical.
  • Lookup assignment, lookup table to physical resource.
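
A hypothetical sketch of those mapping steps; the bucket count, connection strings and hash function are all illustrative.

using System;

static class ShardMap
{
    // Many logical buckets map onto a few physical stores, so stores can be rebalanced later.
    const int BucketCount = 256;

    static readonly string[] PhysicalConnectionStrings =
    {
        "Server=shard0.database.windows.net;...",
        "Server=shard1.database.windows.net;...",
    };

    public static string GetConnectionString(string partitionValue)
    {
        // 1. Convert the partition value (e.g. last name) to a key via a stable hash.
        int hash = StableHash(partitionValue) & 0x7FFFFFFF;

        // 2. Map the key to a logical partition: mod by bucket count.
        int logicalBucket = hash % BucketCount;

        // 3. Map the logical partition to a physical partition.
        int physical = logicalBucket % PhysicalConnectionStrings.Length;

        // 4. End up with a connection string.
        return PhysicalConnectionStrings[physical];
    }

    // string.GetHashCode isn't stable across processes, so roll a simple deterministic hash.
    static int StableHash(string s)
    {
        unchecked
        {
            int h = 23;
            foreach (char c in s) h = h * 31 + c;
            return h;
        }
    }
}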

  • Twitter, two tiers of people, normal people with 300 followers, celebrities.

  • Querying over shards, gather and query, query is done in data tier.
  • Eventual consistency can be done: geo-scale with a local write that the writing customer can see immediately, then a background task writes it elsewhere, or pops it on a queue.
  • Submit queries to all nodes manually, gather results.
  • SQL Azure Federations, does sharding for you and live splits, works for some problems, the central gateway becomes the choke point.

  • Consider rush hour in a region; consider using a region that is currently quiet.

Caching

  • Memcached clients are aware of servers and keys.
  • Windows Azure cache knows Azure, cache is deployed as a worker role.
  • Partitioning is driven by the server; has a high-availability option and perf monitor counters.
  • Can add instances and it handles them automatically, but you cannot remove them easily.
  • Dual writes make it reliable with a small overhead. Does your app care; does it need cache hits?

Part 2

  • The importance of designing for insight, instrumentation, performance and reliability.
  • Design for failure, part of the system being offline, ignore or queue, retry, backlog.
  • Putting trace or logging config in a config file won't work in the cloud, need to design a remote config system.
  • There is a good chance of long outage periods (minutes of downtime per month) while still being within SLA.
  • Deal with it.
  • More components, more chance of something being down.
  • Hiccups, retry a few times, then mark as down.
  • Node down, service down, entire region hit by act of God.
  • The CloudFX library has retry policies; when retries are exhausted it throws a transient fault exception.
  • RETRIES MUST HAVE RANDOM DELAY
  • Retries should be coordinated so they don't stack up: only one call retrying, with the others either queuing or failing completely without even trying.
  • Semaphore around the retried resource/object; a sketch follows.
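
A rough sketch of both rules together: randomised retry delay, with a semaphore so only one caller retries at a time. CloudFX has its own retry policies; this just illustrates the idea.

using System;
using System.Threading;
using System.Threading.Tasks;

static class RetryHelper
{
    // Gate so only one caller is in a retry delay against the resource at a time.
    static readonly SemaphoreSlim RetryGate = new SemaphoreSlim(1, 1);
    static readonly Random Jitter = new Random();

    public static async Task<T> WithRetryAsync<T>(Func<Task<T>> operation, int maxAttempts = 3)
    {
        for (int attempt = 1; ; attempt++)
        {
            try
            {
                return await operation();
            }
            catch (Exception)
            {
                if (attempt >= maxAttempts)
                    throw;

                await RetryGate.WaitAsync();
                try
                {
                    // RETRIES MUST HAVE RANDOM DELAY: jitter stops clients retrying in lockstep.
                    int delayMs = Jitter.Next(500, 2000) * attempt;
                    await Task.Delay(delayMs);
                }
                finally
                {
                    RetryGate.Release();
                }
            }
        }
    }
}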

  • Load needs to be spread over regions.
  • Route away from failures.
  • Press Association deployed to 8 datacentres.
  • Traffic Manager can route around poor performance: it gets the closest DC by IP, but routes away when things go bad.
  • Location is not the same as IP latency; use IP latency.
  • Traffic Manager has custom health probing in SDK.
  • Queues duplicated in different regions; processors are local, consuming from both the local and the duplicated queues.

  • How quickly should I react to new insight?
  • Do I know the question or am I exploring data?
  • KPI, time series, scalar stat, trending, ratios.
  • How much data is required to gain insight?
  • Perf stats against app stats, like total users, active users.
  • How much of the source signal do I need for insight?
  • Local computation vs. global system computation?
  • Requests queued is your most important metric.
  • New Relic works on Azure by agent.
  • OpsTera
  • PagerDuty
  • WAD Windows Azure Diagnostics
  • WAD has challenges: it won't give you 3rd-party diagnostics, and perf data is written to table storage with a 60-second time-based partition key, so IOPS is bottlenecked when monitoring many servers and you have to turn down the sampling.
  • Queue based means alerts can be slow to propagate.
  • The stores are not very queryable; it's table storage!

Table Store

  • Stores performance counter and application log data.
  • General max throughput is 1,000 entities per partition per table per account.
  • The same cap applies on the way out.
  • Split data by history and realtime, push to a logging service that splits.
  • High value: filter, aggregate, publish; anything written is actionable: alerts, dashboards, operational intelligence.
  • High volume: batch, partition, archive; trends, root cause, mining.
  • WAD is very configurable; verbose written to file and then forwarded to blob storage. Blob storage can sustain this sort of load up to 1000 instances per storage account.
  • Keep storage accounts separate for instrumentation data.
  • Create a custom data source in WAD, monitoring a folder, if I put the file here, you put the file there.
  • Log4Net: rolling files are all you need; make all writes async.

// logging and retry with CloudFX
try
{
    // This snippet comes from inside a generic execute-with-retry helper;
    // ret, func, TimeMethods, Logger etc. belong to the surrounding class.
    Stopwatch stopWatch = null;
    if (TimeMethods && !String.IsNullOrEmpty(methodName))
    {
        stopWatch = Stopwatch.StartNew();
    }

    // ReliableSqlConnection applies the retry policy around the connection.
    using (var connection = new ReliableSqlConnection(connectionString, _policy))
    using (DbPolicyObserver reporter = ConfigureDaPolicyObserver(methodName))
    {
        connection.Open();
        ret = func(connection);
    }

    if (stopWatch != null)
    {
        stopWatch.Stop();
        Logger.TraceApi(String.Format("{0}.{1}", ComponentName, methodName), stopWatch.Elapsed);
    }
}
catch (Exception e)
{
    Logger.Warning(e, "Error in...");
    throw;
}


Best Practices on MS SQL Server on Azure Virtual Machines

  • Affinity Groups, under settings; group resources as objects that work together, Azure provisions them to work together.
  • Availability Sets ensure resources do not get shut down together (updates, outages per rack).
  • Virtual Network: vNets with subnets and DNS servers, your playground; separate vNets can have the same addressing space because they're isolated. Hard or impossible to change later.
  • Can link to on-premises via a permanent hardware VPN, or can put a replicated AD (or a new AD with a trust) in Azure; can also do point-to-site via the Windows client.
  • Can set an AD server as the DNS server for your vNet, though VMs must be DHCP-assigned by Azure; the lease is effectively infinite.
  • Can configure network infra in Azure via XML files.
  • Don't put TempDB on local Azure disk anymore, Azure practices change fast.

Deployment/Licensing

  • SQL Server gallery images have licensing implications: for Windows Server, your license is included in the time the VM is up; for MSSQL, this is the same.
  • License mobility lets you move on-prem license to Azure, so use a vanilla Windows gallery image and load on.
  • Can upload a VHD, even use SysPrep.
  • Backup to cloud (from on-prem):

CREATE CREDENTIAL myCredential
WITH
IDENTITY = 'TechEd-Creds', -- the storage account name
SECRET = '' -- the storage account key (elided)
GO

BACKUP DATABASE [ReportingServerScale]
TO URL = 'https://<storageaccount>.blob.core.windows.net/<container>/ReportingServerScale.bak'
WITH
CREDENTIAL = 'myCredential',
NOFORMAT,
NOINIT,
NAME = N'TechEd Demo',
SKIP,
NOREWIND,
NOUNLOAD,
STATS = 10
GO

Recommendation for MSSQL on VM

  • Remove unused endpoints on the VM.
  • Use virtual networks instead of public RDP ports to administer your VMs.
  • Use VPN tunnel to connect to database servers.
  • Carefully plan virtual networks to avoid re-configuration; have to tear down and rebuild everything if the network needs resizing.
  • Use Availability Sets and Affinity Groups with VMs.
  • Use mixed-mode authentication when not in a domain; Windows mode is the default but not always the best idea.
  • Add new port endpoint and add load balancing to it via the portal.
  • Not sure if balancer is aware of downed node.
  • Make sure Windows Update times are staggered to avoid downtime, even if in same Availability Group.
  • Enable database connection encryption, not default.
  • Run ALTER SERVICE MASTER KEY REGENERATE because gallery uses same image.



Service Bus

Note: Azure Queues, part of the Azure Storage services, also exist and are more feature-limited.


  • Queues, part of Azure Messaging services.
  • Topics, pub/sub event aggregator
  • Relays
  • Notifications

  • With Azure queues, if the content of the message is not XML-safe, then it must be Base64 encoded. If you Base64-encode the message, the user payload can be up to 48 KB, instead of 64 KB.

  • Each message comprises a header and a body, and cannot exceed 256 KB.
  • Max concurrent TCP connections to a single queue is 100, shared between senders and receivers; the limit is not imposed when using REST.
  • Queue size between 1 and 80 GB.
  • Azure queues and Service Bus queues: 2,000 msg/s with 1KB.
  • Azure queues: 10ms latency with no nagling.
  • SB queues: 20-25ms.

  • For decoupling, load leveling, scale out.
  • Topics allow for:
    • Broadcast and partition
    • Content-based routing
    • Messaging patterns

SDK 1.8

  • Message Lock Renewal, for slow processing.
  • Entity queries, in C# and REST, see code example below.
  • Forward messages between entities: trees of queues composed together to support 1000s of topics; a topic forwards to 100 topics, each of which forwards to 100 more, etc. (a sketch follows after this list).
  • Batch APIs
  • Browse sessions
  • Updating entities, enable/disable
  • ConnectionString config file key based setup supported.
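
A small sketch of wiring up auto-forwarding between entities; the entity names are illustrative, and namespaceManager is assumed (as in the SAS example further down).

// Messages sent to this queue are forwarded on to the topic by the broker itself.
QueueDescription ingressQueue = new QueueDescription("ingress-42")
{
    ForwardTo = "central-topic"
};
namespaceManager.CreateQueue(ingressQueue);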

Notification Hub Preview

  • Scalable, cross platform, push notification.

SDK 2.0

  • Shared Access Secrets (SAS key), namespace and entity level, via C# or Azure portal, regen/revoke keys.
  • Auto-delete idle entities: idle topics and idle sub clients are cleaned up automatically; good for auto scale-down cleaning up subs no longer used, or test debris.
  • Event-driven model, to remove the hardship of writing a correct receive loop; the SDK can now have observers for receive and exception (a sketch follows after this list).
  • Task-based Async API
  • Browsing Messages
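
A sketch of the event-driven receive model, assuming an existing queueClient; OnMessage runs the callback per message and the observer handles exceptions.

var options = new OnMessageOptions
{
    AutoComplete = true,    // complete the message when the callback returns cleanly
    MaxConcurrentCalls = 5  // pump up to five messages concurrently
};
options.ExceptionReceived += (sender, args) => Console.WriteLine(args.Exception);

queueClient.OnMessage(message =>
{
    Console.WriteLine("Received: " + message.MessageId);
}, options);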

SDK 2.1

  • AMQP, the standardised messaging protocol that originated at JP Morgan.
  • Paired Namespaces

// How to set SAS rule on an entity

QueueDescription qd = new QueueDescription(qPath);
var sendRule = new SharedAccessAuthorizationRule(
    "ruleName",
    SharedAccessAuthorizationRule.GenerateRandomKey(),
    new[] { AccessRights.Send });

qd.Authorization.Add(sendRule);
namespaceManager.CreateQueue(qd);

// How to connect to a queue using SAS

Uri runtimeUri = ServiceBusEnvironment.CreateServiceUri("sb", serviceNamespace, string.Empty);
MessagingFactory mf = MessagingFactory.Create(
    runtimeUri,
    TokenProvider.CreateSharedAccessSignatureTokenProvider(keyName, key));
QueueClient sendClient = mf.CreateQueueClient(qPath);

  • Max 12 rules per entity.
  • Before this, needed to use "Users" or AD federation.


// Entity Query API

IEnumerable<QueueDescription> queueList = namespaceManager
    .GetQueues("messageCount Gt 10");

IEnumerable<TopicDescription> topicList = namespaceManager
    .GetTopics("startswith(path, 'foo') eq true AND AccessedAt Lt '" + startTime + "'");

IEnumerable<SubscriptionDescription> subscriptionList = namespaceManager
    .GetSubscriptions(topicName, "messageCount Gt 0 AND AccessedAt Lt '" + startTime + "'");



// Looks like OData, so can use Linq?

  • For querying when you have many queues, topics etc.
  • Use case: filter for unused queues.


// Message Browse - peeking

QueueClient queueClient = QueueClient.Create("myQ");
queueClient.Peek(); // does not lock the message.
queueClient.Peek(fromSequenceNumber: 4); // specific starting point.
queueClient.PeekBatch(messageCount: 10); // supports batching.


// Asynchronous API

queueClient.SendAsync(currentOrder);


AMQP

  • Efficient, binary.
  • Reliable: fire-and-forget, exactly-once delivery
  • Portable data representation
  • Flexible: client-client, client-broker, broker-broker
  • Broker-model independent


Table Storage

  • See also Blobs, Drives, Azure Queues, Files.
  • Primary and secondary access keys (also now supports direct REST access)
  • Data items called 'entities'
  • Fixed PartitionKey, RowKey and Timestamp properties
  • 252 additional properties of any name, schemaless.
  • PK and RK form clustered index.
  • AtomPub REST and .NET APIs

[DataServiceKey("PartitionKey", "RowKey")]
public class Movie
{
    /// Movie Category is the partition key.
    public string PartitionKey { get; set; }
    /// Movie Title is the row key.
    public string RowKey { get; set; }
    public DateTime Timestamp { get; set; }
    public int ReleaseYear { get; set; }
    public double Rating { get; set; }
    public string Language { get; set; }
    public bool Favorite { get; set; }
}


  • .NET uses the concept of a context: changes are made to the context and then saved, and can thus be batched into a transaction. Similar entity change tracking to EF.
  • Null values are ignored by storage engine.

  • Queries are begun using context.CreateQuery and look like EF Linq queries.
  • Scanning a partition, or a range of partitions, is done using .CompareTo("Key") >= 0:

where fooEntity.PartitionKey == partionKey
    && fooEntity.RowKey.CompareTo(lowerBoundRowKey) >= 0
    && fooEntity.RowKey.CompareTo(upperBoundRowKey) <= 0

where
   fooEntity.PartitionKey.CompareTo(lowerBoundPartKey) >= 0
   && fooEntity.PartitionKey.CompareTo(upperBoundPartKey) <= 0
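
Putting the fragments together, a sketch of a full range query; it assumes the Movie entity above, the older DataServices-style context API, a connectionString variable, and placeholder keys.

using System.Collections.Generic;
using System.Linq;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table.DataServices;

CloudStorageAccount account = CloudStorageAccount.Parse(connectionString);
TableServiceContext context = account.CreateCloudTableClient().GetTableServiceContext();

// All "Action" movies whose titles fall in the row key range [A, N).
List<Movie> movies =
    (from movie in context.CreateQuery<Movie>("Movies")
     where movie.PartitionKey == "Action"
        && movie.RowKey.CompareTo("A") >= 0
        && movie.RowKey.CompareTo("N") < 0
     select movie).ToList();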


Tips

  • Use a new context for each op; the context object is not thread-safe.
  • Can use IgnoreResourceNotFoundException and a null return to avoid exception overhead on an empty-lookup 404; a sketch follows.
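
A sketch of that tip, using the same assumed context and Movie entity as above.

// Point lookups that miss return null rather than throwing a DataServiceQueryException.
context.IgnoreResourceNotFoundException = true;

Movie movie = (from m in context.CreateQuery<Movie>("Movies")
               where m.PartitionKey == "Action" && m.RowKey == "Die Hard"
               select m).FirstOrDefault();
// movie == null when the entity doesn't exist.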

Performance


  • Scan cost depends on row size, not just the number of rows in the partition or in the where set.
  • Research whether it's best to run a single query spanning a range of partitions, vs. running concurrent queries on each partition.
  • Partitions served from single server.
  • Avoid hot partitions, unbalanced schemes.
  • See "Lessons Learned" above for tips on shard key mapping algos.
  • Row size: 1MB
  • 200TB per table
  • 1,000 rows per query response, use continuation token, no snapshot consistency.
  • 500TB per storage account.
  • 20,000 entities or messages/second per account.
  • 10Gbit/s in, 20 out for geo-redundant; 20 in, 30 out for locally redundant.
  • 2,000 entities/second per partition.


Note to Self: The Windows Store GridView Control, Deconstructed

Tuesday, July 09, 2013 / Posted by Luke Puplett / comments (0)

Here’s a diagram of the GridView control for Windows Store apps, deconstructed into its components. This is intended to show all the templates and styles in various places to help customise this complex control.

Download here.


Converting ASP.NET WebForms to ASP.NET MVC 4.0

Monday, April 22, 2013 / Posted by Luke Puplett / comments (2)

This is a blog-in-progress while I try to convert an ASP.NET WebForms application to MVC 4. It may completely fail or I may give up, but I thought it might help to share my experiences.

What I am Migrating

It’s a one-project ASP.NET WebForms 3.5 site. It’s pretty simple; it uses the old Ext JavaScript framework, which became Sencha UI, I think. There’s a fair few pages but not a lot of HTML in each, since it’s built with XSLT (vomit) from XML coming from an XML database. Much business logic is in the data layer (vomit II).

Strategy

My bright idea is not to convert. I don’t think that’s the easiest route; I just don’t know what’s needed for an MVC app, and I want the IDE to be in MVC mode, with the context menu support for views and stuff, which probably won’t happen if I just add some DLL references and set up some routing.

So, I will make a new, empty MVC 4 app and copy in the files from the old world. I know MVC is happy to serve-up ASPX forms pages and controls, and that’s all a WebForms site is – just some ASPX pages and some handlers, maybe some URL rewriting.

Start

So far, I have:

  • Created an empty, new ASP.NET MVC 4.0 project.
  • Set the same project references and NuGet packages.
  • Set my solution and project configurations for staging/QA.
  • Copied over all the stuff from the old Web.config that I think is non-standard, i.e. added to support the old app. I already did this, so it’s hard to blog in detail, but it’s actually pretty simple.
  • Begun to copy over the basic, high-in-the-dependency-graph controls and pages.

Copying Stuff Across

I have copied /MasterPages and its children, /Classes which are just some .cs files with helpers inside, /Controls which are Web User Controls or ASCX files as well as the default.aspx (all come with their code-behind and designer).

Problem 1 – Solved

In copying the files, drag and drop, from the WebForms project in the same solution, the IDs of the controls on the ‘pages’ (in the ASPX or ASCX files) are not being ‘seen’ in the code-behind. By that, I mean there are red squigglies in the C# wherever they are referenced; it’s like the controls on the pages are not being compiled.

I reconstructed a control manually, by adding a new one with a different name and copying over the important mark-up and code. This was fine, so MVC is cool with it, just doesn’t like it being copied file by file.

So I figured that it must be related to the designer file. The file doesn’t sit at the same level in the Solution Explorer as the manually created good one, so there’s something odd going on. Opening the designer.cs file is fine, but the code doesn’t respond to mouse-overs; it’s lifeless like a text file.

Solution: The trick is to delete the file and then right-click its parent AS?X file and hit Convert to Web Application which forces regeneration of the designer.cs.

You can copy a load in and then convert at the folder or project level, too, donchaknow.

Problem 2 – Solved

The default route and getting default.aspx to be the page shown at the domain root. This one is easy, although I’m not sure it’s the proper way. Simply add this route:

routes.MapPageRoute("HomePage", "", "~/default.aspx");

Problem 3 – Solved

Settings in httpHandlers not working, i.e. requests still going via the routing system. This site has a load of magic set up in the web.config to make friendly URLs happen. Of course, this needs to be re-considered in an MVC world, but we’re talking about things like blah.xml which invokes a special handler; it’s all custom stuff for this old site.

The solution was two step:

- Add the following line to not route requests:

routes.IgnoreRoute("{resource}.xml");

- Also need to update the types in the httpHandlers section in web.config

<add verb="*" path="*.xml" type="Company.XmlHandler, SiteDllFile" />

- To

<add verb="*" path="*.xml" type="Company.XmlHandler, NewMvcSiteDllFile" />

Problem 4

The form values security and validation seem to have been tightened up in ASP.NET 4.0 or something, because I was getting an exception when reading Form values containing XML fragments. This was remedied with this config setting:

<httpRuntime requestValidationMode="2.0"/>

Problem 5 – At this stage, there has been no problem 5

With everything else copied over and some shared components refactored out into a shared library, everything else is working.


Data says Git is officially the world's most woeful piece of software

Monday, April 15, 2013 / Posted by Luke Puplett / comments (3)

When computer programmers have a problem, they turn to StackOverflow. The site has a great feature to vote-up a question, so rather than ask the same question, you can say "Me too" by casting a vote.

So what then is the software with the highest voted questions?

Overwhelmingly, Git.



Problems with Git are responsible for 1 in 5 of the top-voted questions on StackOverflow, which is really saying something when it is such a small tool compared to, say, an entire language.

So next time you're having problems with Git and someone tells you it's not Git's fault and not to blame the tool, you can point out that it actually really is the most woeful programming software in existence today.

There's a lot of hate for Git, but it also has a very active and noisy tribe of supporters who get very defensive when people dare to criticize it. Of course, criticism and confronting problems is the first step towards making improvements, so this defensiveness banishes Git to an ugly status quo.


Ensuring that two PowerShell scripts don't run at the same time

Tuesday, January 22, 2013 / Posted by Luke Puplett / comments (7)

This quick PowerShell snippet shows how you can ensure only one instance of a script or section of a script executes at a time on a system, i.e. a server running scheduled tasks.

The script will wait while the other script or section completes. The Dispose method releases the mutex and allows any other scripts to take it and run. It should ideally be in a finally block to ensure it always gets released. I've read that it uses a .NET critical finalizer to ensure release, but I don't know if this works as well in PowerShell as it would in a proper .NET process.

    [System.Threading.Mutex]$mutant;
    try
    {
        # Obtain a system mutex that prevents more than one deployment taking place at the same time.
        [bool]$wasCreated = $false;
        $mutant = New-Object System.Threading.Mutex($true, "MyMutex", [ref] $wasCreated);        
        if (!$wasCreated)
        {            
            $mutant.WaitOne();
        }

        ### Do Work ###
    }
    finally
    {       
        $mutant.ReleaseMutex(); 
        $mutant.Dispose();
    }

Remote PowerShell like SSH

Thursday, December 13, 2012 / Posted by Luke Puplett / comments (2)

Here's a super quick howto for using PowerShell like you Linux dudes use SSH to remotely console into a server.

First, make sure Windows Remote Management is set up on the target server. So RDP onto the box and open a command prompt. Run this:

winrm quickconfig

Now that's set up, close RDP, and on your client admin/dev type box, open PowerShell 2.0.

Run the following commands.

$domainAdmin = Get-Credential
# Enter your domain admin or other privileged credentials in the box that pops up.
Enter-PSSession -ComputerName web3-pool2-ln -Credential $domainAdmin

After a few seconds the prompt should change and you're in. Use 'exit' to come out.


Facebook Login for Windows Phone Apps

Friday, May 18, 2012 / Posted by Luke Puplett / comments (4)

The brief: allow new customers to sign up and sign in with their Facebook account, because they have this option on the website, so they’re going to need it in the phone app.
This is a high-level blog post about enabling Facebook login in a Windows Phone application. Once you’ve configured your app and got your client/consumer ID and secret from developers.facebook.com, the process to actually authenticate your users is very simple - it’s the bigger picture that’s more difficult, and so this blog post aims to prepare you rather than give you a few code snippets for what is essentially just extracting some tokens from a string.
So, with that said, these are the things to consider before writing any code:

  • How does OAuth work?
  • How will a Facebook account map to my accounts?
  • How does this affect my current secure authentication?
  • How will the login screen work on the phone?
  • What happens when the app has been slept for a long time?

How does OAuth work

Facebook uses a version of OAuth. Personally, I like learning specifications and writing my own code, rather than learning a framework or SDK. Usually, the spec is more clearly documented than other people’s SDKs.
Facebook has its own documentation covering how to authenticate, which I strongly advise you to read. The OAuth specification will give you a wider understanding, it’s pretty simple, but I must point out that Facebook doesn’t stick to the spec.
I’m going to explain the process in a nutshell here, but before I do that, consider that your application must register with Facebook and obtain a Consumer ID and Secret which will identify your app to Facebook. The aim is to get hold of an Access Token, which is a short-lived ticket that represents your rights to act on behalf of a Facebook customer.
  • There are a few types of authentication, depending on whether you’re building web apps or mobile/desktop/GUI apps.
  • For a web app, your server redirects the user to the Facebook OAuth sign-in page and passes across your Consumer ID so Facebook knows it’s issuing an Access Token for your app.
  • When doing so, you pass Facebook a URL to redirect back to, after the user signs in.
  • Your server ‘waits’ for the redirect and then extracts a temporary token from it, which it uses to fetch the proper Access Token directly from Facebook, using an HTTP GET.
  • For a client app with a UI, it’s much simpler.
  • Place a Web Browser control on a page and hook-up the Navigated event.
  • You automate the Web Browser control to navigate to the Facebook OAuth sign-in page, the user then logs in.
  • Facebook then redirects to a page you specify (at a domain you preconfigure with Facebook) with the Access Token in the URL’s fragment portion; as this occurs, the Navigated event fires a few times.
  • Inspect the Uri at each point to see if it has the Access Token or an error code. As soon as you have the token, you can progress the UI to the next stage.
  • You’re looking for access_token=xyz and expires_in=123 (seconds) parameters in the fragment portion of the URI; it’s simply a case of parsing the string, as in the sketch below.
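
A minimal sketch of that parsing, to be called from the Web Browser control’s Navigated event; the helper name is mine.

using System;

static class FragmentParser
{
    // e.g. https://.../login_success.html#access_token=xyz&expires_in=123
    public static bool TryGetAccessToken(Uri uri, out string accessToken, out TimeSpan expiresIn)
    {
        accessToken = null;
        expiresIn = TimeSpan.Zero;

        string fragment = uri.Fragment.TrimStart('#');
        foreach (string pair in fragment.Split('&'))
        {
            string[] parts = pair.Split('=');
            if (parts.Length != 2)
                continue;

            if (parts[0] == "access_token")
                accessToken = Uri.UnescapeDataString(parts[1]);
            else if (parts[0] == "expires_in")
                expiresIn = TimeSpan.FromSeconds(double.Parse(parts[1]));
        }

        return accessToken != null;
    }
}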

Why use Facebook to authenticate your customers?

Essentially, there are only a couple of reasons. The first is integration: you’d like your app to connect to Facebook and programmatically post to your customer’s feed or see who their friends are, perhaps. The second is to reduce sign-up friction and provide a better experience to the on-boarding process. This may be simply to remove the need for a customer to remember another password (and providing a way to reset forgotten passwords) or because your sign-up process asks a bunch of questions that you could actually just pull from their Facebook profile.
In the latter case, you’ll likely have your own customer entities in a database that will need to be linked to a Facebook account.

Mapping the Facebook account to your own accounts

If you’re retrofitting Facebook login to an existing app, then you’ve probably already got your own login process, so you’re going to need to offer a login screen supporting the old username and password sign-in as well as the new OAuth method.
As mentioned above, you might need to have a process for reading some details from Facebook and creating an account entity in your own system, and you may even wish to offer a way for users of the old sign-in scheme to connect their accounts.
I won’t go into detail about how to do this linkage, but whichever way you choose to accomplish it, you’ll need to ensure that another Facebook OAuth application cannot simply log-in to your app by just sending a Facebook ID to your login system.
Your server-side system should require the Access Token and the user’s Facebook ID, and then use the Access Token against the Facebook Graph API directly to obtain the default FB account and check that the ID of the user it returns matches what you’ve been sent.
You’ll also need to prove that the Access Token and user ID have come from your app, so you’ll need to sign the data with a secret that’s shared between your servers and your app, which means obtaining/agreeing a key before the Facebook sign-up occurs.
If you don’t do this, then there’s nothing to stop another Facebook app from getting an Access Token and FB user ID and sending them to your login endpoint and masquerading as one of your customers!
There’s an inherent weakness here, in my opinion, that could be fixed if, when your server fetches the user account using the supplied Access Token from Facebook, you could supply your App Secret and FB could ensure that the Access Token was issued to your app.

How will the login screen work on the phone?

If you don’t have an existing login scheme then you only need to supply the Facebook login option, unless for privacy reasons, you’d like to allow your customers to sign-up without Facebook.
It’s safe to assume that login will take place from a dedicated page, as opposed to a popup control. The user should only be bothered by the login screen when they need to login, and that page needs to play host to a web browser.
We also need to consider sign-up, as well as sign-in; your application may need to collect extra information on sign-up, data that’s not available from Facebook, but also, your customers might not want to use Facebook.
In my scenario, I have a dedicated page and flow for non-Facebook sign-up, and a dedicated page and flow for Facebook sign-up and sign-in (combined).
The flow goes something like this:
OAuth Page Flow
The left-most page is the Home Panorama which detects a guest login and provides two menu options for logging-in and signing-up.
The top path consists of:
  • The Login Method Selector page, offering the Facebook login and a normal username + password UI.
  • Using the latter will call the normal login web service and follow the quick route back to the Home page, while selecting Facebook login, will navigate to a page with a Web Browser control.
  • This page will display the standard Facebook OAuth login screen and upon entering details, the browser control will vanish and, if the customer is signing-up, they’ll be presented with a page through which they can supply a screen-name, otherwise, if logging-in, they’re just navigated to the Home page.
  • If sign-up goes well, then a welcome message is displayed and the user is offered the option to post a Wall message or click to go straight to the Home page.
The bottom path consists of:
  • The Sign-up Method Selector page, offering Facebook and ‘Manual’.
  • The Facebook option takes the user to the top flow, whereas the manual route consists of a few more pages / UI that collects and checks all the extra data that’s needed to create an account, data that is normally taken from their FB account details.

What happens when the app has been slept for a long time?

Login credentials are persisted between app use sessions and the Home page is able to detect which login method the user used last time. For a Facebook login, the previous Facebook Access Token is verified and, if expired, the app navigates to the Facebook login page and brings up the Web Browser control.
If the Web Browser has cached login details then the browser will automatically be logged-in, without the user typing anything, and the app will navigate back to the Home page. This flow happens so quickly that it appears that the app opens at the Facebook page, looks busy for a couple of seconds and then goes to the Home page.
This flow might take some time on slow networks but Manual logins can simply authenticate without navigating anywhere and work much more smoothly.
So far, this is all fine and dandy, but in reality the Home page is not the first page of an app. An app may be entered via the back button or on resume, into a state where the user is no longer considered logged-in – the Access Token has expired or your server session has been pruned.
In my app, I use my own MVCVM pattern. I have a Controller in addition to the ViewModel. This is just a personal preference, I like to keep my ViewModels as just ‘binding and commanding surfaces’ with no logic.
Doing things this way keeps me from adding spiralling side-effect logic in property setters and coerces me to use dedicated helper and utility classes rather than be tempted to inherit too much application logic - I’ve worked on apps that reuse logic by VM subclassing and it gets ugly. I also like to build standard ‘dumb’ VMs which can be reused across the app and contain only what needs to be on the screen.
Saying that, I do use inheritance in this situation. My base PageController has a set of virtual methods which orchestrate all the initialization, one of which is called to check authentication.
Each time a page is navigated to, the PageController runs some code to ensure the user is logged-in which allows me to redirect the user to the Login page and return afterwards using the BackStack. I also check the BackStack and remove the sign-in pages so the user can’t back into them.
With this logic on each page, even if the user lets the phone go idle overnight while on a page deep within my app, the Login flow will run in the morning, as soon as the page resumes.
Of course, you don’t have to have funky Controllers and virtuals to do this, but it needs bearing in mind that authentication isn’t just something that happens on the Home screen.
Time will tell whether this page flow method works. It’s perfectly feasible to embed a Web Control in a popup UI control or inject it into the visual tree. As networks get faster (I’m looking at you, 4G), then Facebook login will become a less irksome UI dance.
Have fun, and here’s some useful links:
Facebook Authentication Documentation
http://developers.facebook.com/docs/authentication/
OAuth 2.0, draft 12 – although Facebook strays from the standard in some fairly major ways.
http://tools.ietf.org/pdf/draft-ietf-oauth-v2-12.pdf
Where Facebook veers from the OAuth standard.
http://stackoverflow.com/questions/9724442/is-facebooks-oauth-2-0-authentication-a-strict-implementation-of-the-rfc
