Thursday, May 26, 2016

Troubleshooting the dreaded "The build directory of the test run either does not exist or access permission is required." error message in Microsoft Test Manager

Our company is starting to use Microsoft Test Manager a lot more to manage our testing, both manual and automated. We've also recently started creating some new Test Controllers and Test Agents with an exotic network configuration. On one of these new Test Controllers, I've started creating Lab test runs and getting the error message "The build directory of the test run either does not exist or access permission is required." When you Google this error message, you get what's described in this post on MSDN. In my case, it was the former problem: the account under which the Test Controller was running couldn't see the build folder.

After reading that last statement, you might ask, "well, why didn't you check to make sure that the drop folder was where it should have been?" And I did. Sort of. Due to our network configuration and aliases, my user could see it in the expected place at the alias in my portion of the network. However, the user under which the Test Controller was running (not the account the tests run under as configured in Microsoft Test Manager, but the account running the Test Controller service itself; they're not the same user) couldn't, because there is synchronization going on between the network locations that share that alias. Once I logged on to the machine running the Test Controller service ** AS THE USER UNDER WHICH THE TEST CONTROLLER WAS RUNNING ** and browsed to the network alias myself, I could finally see that the synchronization between the locations sharing that alias wasn't running, and the build I expected to be there was in fact not there.

Problem solved.

Monday, May 02, 2016

Migrating code in TFS from one Team Project to another Team Project within the same Team Project Collection

My company has frequently needed to move code around within the same Team Project Collection as shifting desires and motivations for how our code should be organized have swayed the company. Previously, we've used tools such as the TFS Integration Platform available on CodePlex. That project has its share of issues, namely that:
a) It was half-baked to begin with
b) It's no longer maintained and doesn't work with versions later than TFS 2012.

Fortunately for me, thanks to a fortuitous Stack Overflow post, I've found the following script, which recursively moves the contents of one folder to another folder. It even works across Team Projects within the same collection (though not across Team Project Collections):

Param(
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()][string] $SourceFolder,
    [Parameter(Mandatory = $true)][ValidateNotNullOrEmpty()][string] $TargetFolder
)

Get-TfsChildItem $SourceFolder |
    select -Skip 1 |  # skip the root dir
    foreach {
        & "C:\Program Files (x86)\Microsoft Visual Studio 12.0\Common7\IDE\TF.exe" rename $_.serveritem $_.serveritem.replace($SourceFolder, $TargetFolder)
    }

In order to run the script above, you'll need both Visual Studio and the TFS Power Tools installed. When you install the Power Tools, make sure the PowerShell module is selected in the installation options. Also, because the path to the Power Tools PowerShell module isn't added to PSModulePath by default, you'll have to manually import the module into your PowerShell session like so (before you run the script):

Import-Module "C:\Program Files (x86)\Microsoft Team Foundation Server 2013 Power Tools\Microsoft.TeamFoundation.PowerTools.PowerShell.dll"

Saturday, April 16, 2016

Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://localhost:xxxxx' is therefore not allowed access

While developing an adal.js app with OData in IIS Express, I was getting the following error:

Response to preflight request doesn't pass access control check: No 'Access-Control-Allow-Origin' header is present on the requested resource. Origin 'https://localhost:44315' is therefore not allowed access

It turns out the problem was that I needed to include the CORS package from Microsoft:

Install-Package Microsoft.AspNet.WebApi.Cors

Once that's installed, I also needed to enable CORS using the 'EnableCors' extension method on my HttpConfiguration.
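
In case it helps, here's a rough sketch of what that wiring can look like in a Web API configuration class. The WebApiConfig class name and the allowed origin are just placeholders; substitute whatever your project actually uses:

using System.Web.Http;
using System.Web.Http.Cors;

public static class WebApiConfig
{
    public static void Register(HttpConfiguration config)
    {
        // Allow the ADAL.js client running on the local development origin to call this API.
        // The origin below is just a placeholder; list the origin(s) your client actually uses.
        var cors = new EnableCorsAttribute(
            origins: "https://localhost:44315",
            headers: "*",
            methods: "*");

        config.EnableCors(cors);

        config.MapHttpAttributeRoutes();
    }
}

If you'd rather not apply a global policy, you can instead decorate individual controllers or actions with the [EnableCors] attribute.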

Tuesday, April 12, 2016

Using ADAL.js correctly with AngularJS when setting up your endpoints

When searching around and following the tutorials on how to correctly use adal.js to authenticate your calls to a Web API (or anything else) in Azure, you'll often see a block similar to this that you have to put in your App.js to configure your main module:

adalAuthenticationServiceProvider.init(
    {
        tenant: 'mytenant.onmicrosoft.com',
        clientId: 'abc4db9b-9c54-4fdf-abcd-1234ec148319',
        endpoints: {
            'https://localhost:44301/api': 'https://some-app-id-uri/'
        }
    },
    $httpProvider
);

You'll notice that this appears to be pointing to the Web API root of a service running on localhost, and you'd be right. For this to work correctly, you'll need to enable OAuth 2 URL path matching (the oauth2AllowUrlPathMatching setting) in the application manifest of the **client** that's connecting to the service!

Wednesday, March 30, 2016

Running and debugging Azure WebJobs locally on your development machine

Check out this article: https://github.com/Azure/azure-webjobs-sdk/wiki/Running-Locally

It shows you how to run and debug Azure WebJobs locally, which turns out to be extremely handy because you can interact with all the Queues, Tables, Blobs, etc. as you normally would and get a full debugging environment.

Updating your Azure AD Application Manifest

We've recently found that as we develop more applications in Azure, we need to put safeguards on the deployment of the applications to ensure that they're configured correctly. Part of this means editing the application manifests to ensure that certain settings are always enforced on certain applications. Here's what Microsoft has to say about updating Application manifests.

Bottom line: if you want to automate anything to do with manifests, you'll have to write your own application that uses the Azure AD Graph API to retrieve the manifest / application settings and edit them.
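
To give a concrete flavor of what that ends up looking like, here's a minimal sketch that patches a single manifest setting through the Azure AD Graph REST endpoint using a plain HttpClient. The method name, the property being enforced, and the way you obtain the tenant, application object id, and access token are all stand-ins; wire them up however your automation authenticates:

using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

public static class ManifestUpdater
{
    // Patches selected manifest settings of an Azure AD application via the Azure AD Graph API.
    // The tenant, application object id, and access token are placeholders; supply real values
    // from your own configuration and token acquisition code.
    public static async Task EnforceSettingsAsync(string tenant, string appObjectId, string accessToken)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Authorization =
                new AuthenticationHeaderValue("Bearer", accessToken);

            var url = string.Format(
                "https://graph.windows.net/{0}/applications/{1}?api-version=1.6",
                tenant,
                appObjectId);

            // Only the properties included in the body are changed; the rest of the manifest is untouched.
            var body = "{ \"oauth2AllowImplicitFlow\": true }";

            var request = new HttpRequestMessage(new HttpMethod("PATCH"), url)
            {
                Content = new StringContent(body, Encoding.UTF8, "application/json")
            };

            var response = await client.SendAsync(request);
            response.EnsureSuccessStatusCode();
        }
    }
}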

Sunday, February 28, 2016

Properly invoking scheduled WebJobs

Recently we've found the need to start using Scheduled Azure WebJobs. However, the examples out there are all garbage, even in the case where you can find an actual example that uses a scheduled WebJob rather than a continuous WebJob. So, for the benefit of anyone interested, including future me, here's the proper way to invoke a Scheduled WebJob in the entry point of the WebJobs assembly:

    /// <summary>
    /// The main entry point to the scheduled webjobs.
    /// </summary>
    public class Program
    {
        /// <summary>
        /// Main entry point for the scheduled webjobs
        /// </summary>
        public static void Main()
        {
            IKernel kernel = new StandardKernel();

            kernel.Load(new ServicesScheduledWebJobsNinjectModule());

            var jobHostConfiguration = new JobHostConfiguration
            {
                JobActivator = new ServicesScheduledWebJobsActivator(kernel),
                DashboardConnectionString = ConfigurationManager.ConnectionStrings["AzureWebJobsDashboard"].ConnectionString,
                StorageConnectionString = ConfigurationManager.ConnectionStrings["AzureWebJobsStorage"].ConnectionString,
            };

            var host = new JobHost(jobHostConfiguration);

            // Must ensure that we call host.Start() to actually start the job host. Must do so in
            // order to ensure that all jobs we manually invoke can actually run.
            host.Start();

            // The following code will invoke all functions that have a 'NoAutomaticTriggerAttribute'
            // to indicate that they are scheduled methods.
            foreach (MethodInfo jobMethod in typeof(Functions).GetMethods().Where(m => m.GetCustomAttributes<NoAutomaticTriggerAttribute>().Any()))
            {
                try
                {
                    host.CallAsync(jobMethod).Wait();
                }
                catch (Exception ex)
                {
                    Console.Error.WriteLine("Failed to execute job method '{0}' with error: {1}", jobMethod.Name, ex);
                }
            }
        }
    }

What the above does is the following:

  • Configures the JobHost to use a dependency injection container via a custom IJobActivator implementation that, in our case, wraps the Ninject dependency injection container (a sketch of what such an activator looks like follows this list).
  • Configures the JobHost with a custom configuration so that we can control various items, including the connection strings for the dashboard and jobs storage.
  • Starts the JobHost. This bit is important, because all the other examples out there neglect to mention that this needs to be done.
  • Dynamically resolves all schedulable methods that should be invoked, using the NoAutomaticTriggerAttribute built in to the WebJobs SDK. This attribute is used internally by the SDK to determine which methods need to be invoked manually (i.e. on demand) rather than by the continuous invocation used by Continuous WebJobs.
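
For anyone curious about the activator piece, here's roughly what such an IJobActivator implementation looks like when it wraps a Ninject kernel. The class name matches the one used above, but the body is a minimal sketch rather than our exact implementation:

    using Microsoft.Azure.WebJobs.Host;
    using Ninject;

    /// <summary>
    /// A minimal IJobActivator that resolves job classes from a Ninject kernel so that
    /// their constructor dependencies are supplied by the container.
    /// </summary>
    public class ServicesScheduledWebJobsActivator : IJobActivator
    {
        private readonly IKernel kernel;

        public ServicesScheduledWebJobsActivator(IKernel kernel)
        {
            this.kernel = kernel;
        }

        public T CreateInstance<T>()
        {
            // Let Ninject construct the job class, injecting any registered dependencies.
            return this.kernel.Get<T>();
        }
    }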

Continuous Delivery of Azure Web Services

See this:

https://azure.microsoft.com/en-us/documentation/articles/cloud-services-dotnet-continuous-delivery/