I changed jobs just over a year ago, and an awesome part of my new employer's approach to staff development is that they send members of the team to conferences and training.
I had the privilege of attending a boot camp for cloud architects in Bellevue, run by Microsoft for their partners, and, having travelled from Scotland to Seattle, also had a day in between to have a look at the city with my colleagues.
We basically got lost after visiting Pike Place Market; we thought we were heading towards Lake Union but hadn't read the map quite correctly. So we looked around, spotted the Space Needle fairly nearby (it's a bit of a spottable landmark) and headed towards that.
The majority view was that we didn't want to spend the money on going up the Space Needle, so we went next door into MoPOP, the Museum of Pop Culture. As I discovered, this has a fairly significant connection to Microsoft: I saw one exhibit after another from the Paul G. Allen collection, and it slowly dawned on me that a founder of the museum was also a co-founder of Microsoft.
This place is amazing; it starts with the swoopy architecture which has a monorail bursting through it. Then the inside is all modern clean lines with doors and stairs leading to themed exhibits.
There are closed-off exhibitions behind doors covering themes like films or people, and open areas that open out to the full height of the museum.
I failed a Microsoft exam last Friday – yes, it's true, on occasion I fail an exam. One fantastic attitude (amongst the many) at my current employer is that a Microsoft exam fail is part of the journey of discovery. A couple of my new colleagues also remark that any significant score over the pass mark is a waste of study time, and I can kind of see where that comes from.
If you've booked exams in the last few years you will have been informed of the latest retake policy, which has been tweaked and firmed up to give candidates a proper chance between resits rather than letting them brute-force the attempts. At the time of writing the exam retake policy states:
If a candidate does not achieve a passing score on an exam the first time, the candidate must wait at least 24 hours before retaking the exam.
This time I was particularly keen to book my resit as soon as possible. The practicalities of availability in Edinburgh meant that I was expecting to have to wait a couple of weeks at least, so I didn't expect the 24 hours to be a problem. I went through the exam details page, clicked "Schedule Exam", confirmed my details and the link accounts page, and got redirected back to the same page with a light yellow banner: "50055: This exam is not currently offered. Please select another exam."
So I tried a few different ways without success: InPrivate browsing, different devices – all gave the same error. I tried telephoning, only to be told that I would have to wait 24 hours to book. So I waited 24 hours after the end of my exam and still couldn't book.
I was finally able to rebook through the Pearson VUE site at 18:30 on the Monday after my Friday exam; the exam had been scheduled to end at 12:30 (my times are BST). The half hour seems more than coincidental, and the takeaway is that the systems will prevent the booking from taking place until a number of hours have passed on business days.
I don't fail exams often and I certainly don't plan to, and hopefully you don't either. So when the unthinkable happens, don't panic; take time to regroup and make plans. And wait a day and a bit before you try to rebook!
Interesting one today – standing up a Cosmos DB instance to record the output of a CycleCloud job run (which happened to be written in C++) and starting to get "Failed to read item". Data Explorer stopped showing the results for the item when browsing.
The issue was that our new id had been delimited with slashes, and Cosmos DB didn't like it. If you get "Failed to read item" when clicking through, you might have a character in your document id that Cosmos doesn't allow – slashes being prime suspects.
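If that's your situation, it can be worth sanitising candidate ids before you write the document. A minimal Python sketch, assuming the documented set of characters Cosmos DB disallows in an id ('/', '\', '?', '#'); the function name and the replacement character are my own choices:

```python
# Characters Cosmos DB does not allow in a document id.
ILLEGAL_ID_CHARS = set('/\\?#')

def sanitise_id(candidate: str, replacement: str = '-') -> str:
    """Replace any illegal id character with a safe placeholder."""
    return ''.join(replacement if c in ILLEGAL_ID_CHARS else c for c in candidate)

print(sanitise_id('job/run/42'))  # job-run-42
```

Obviously make sure the substitution can't produce colliding ids for your workload before relying on something like this.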
There are some awesome folks out there who share their hard efforts so the rest of us can have an easier job. A few of the resources that have been really useful to me revolve around working against the REST APIs of key Azure services.
My days of day-in, day-out development are over, so I find a lot of my automation "glue" mashing up deployments relies on PowerShell with the odd bit of CLI. Most of it is a little scaffolding to deploy ARM templates, but occasionally a requirement to work with the data plane of a resource appears and I have to resort to manual config.
ARM template support for configuring resources is always improving, but due to timing this isn't always an option. Sometimes it is really helpful to understand what is going on under the hood, and sometimes the only option is REST.
For the latter I thoroughly recommend Postman if you need to interact directly, though Azure is also improving its native API exploring support. I discovered Postman through an Azure Friday video with Steven Lindsay, who has some really useful modules on GitHub. This was really helpful for Cosmos DB (DocumentDB as it was) and really helped me debug some Gremlin issues.
Next is the PowerShell module for Cosmos DB, which sits over REST and, as well as being an awesome example of the kind, is also a really helpful module for checking interactions with Cosmos DB.
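To give a flavour of what those modules are wrapping: the Cosmos DB REST API authenticates each request with an HMAC-SHA256 signature, built from the verb, resource type, resource link and date and keyed with the account's master key. A rough Python sketch of building the Authorization token (function name mine; check the current Cosmos DB REST docs before relying on this):

```python
import base64
import hashlib
import hmac
import urllib.parse

def cosmos_auth_token(verb: str, resource_type: str, resource_link: str,
                      date: str, master_key: str) -> str:
    """Build a master-key Authorization token for a Cosmos DB REST call."""
    # The signed payload is the lower-cased verb, resource type and RFC 1123
    # date, plus the resource link, each newline-terminated (two at the end).
    payload = f"{verb.lower()}\n{resource_type.lower()}\n{resource_link}\n{date.lower()}\n\n"
    key = base64.b64decode(master_key)
    signature = base64.b64encode(
        hmac.new(key, payload.encode("utf-8"), hashlib.sha256).digest()
    ).decode()
    # The whole token is URL-encoded before it goes into the header.
    return urllib.parse.quote(f"type=master&ver=1.0&sig={signature}", safe="")
```

The same date string then has to go into the `x-ms-date` request header, which is the sort of fiddly detail that makes the ready-made modules so welcome.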
Kubernetes, and AKS in particular, is becoming more and more important to us at work. In our experimental facility we have to stand up varying compute platforms; my main project is examining a specific workload on HPC, and part of it needs Kubernetes for some supporting services.
Then I stumbled across a blog by Chris Johnson. I've met Chris (officially a "good guy") exactly twice in person: once in 2010 in Berlin at an Ignite session (when Ignite was a smaller-scale effort) for SharePoint 2010, where he presented a session on Microsoft Certified Master, and again at Ignite in Orlando last year, when I made a point of catching him before he presented a session of the Microsoft Cloud Show with Andrew Connell (also officially a "good guy") and Julia White (yes, that Julia White).
Anyway, this is one of those posts which is as much for my benefit as yours!
Working in the Microsoft cloud ecosystem (ok, Azure) and working for a Microsoft Partner steers me heavily towards the tools that the vendor provides. This works on a number of levels; mainly around depth of knowledge and personally this means getting ready for the next exam.
For code and script storage this means Azure DevOps and GitHub. The choice has got harder lately due to the tweak to the "free" tier on GitHub and private repos, but we all love Azure DevOps because of Pipelines and all the other stuff, even though my primary day-to-day use is as a Git repo.
Of course I've been using Visual Studio for years, and the online version for as long as it has existed. The rebrand to Azure DevOps also brought a new URL option, going from <org>.visualstudio.com to dev.azure.com/<org>, and the latter has created some new joy. I really recommend multi-factor authentication, and I love using the latest and greatest tech from Microsoft, including their security features, as it's about the only way to keep up with the threats we face out there on the internet.
Of course it comes back to bite you from time to time, and this morning has been a classic case. The current Git for Windows release is 2.21.0, but a key component for me, as a multi-factor-protected user of Azure AD and Azure DevOps, is the Git Credential Manager for Windows, and there are a bunch of fixes relating to the new dev.azure.com URL in version 1.19. Git for Windows 2.21.0 unfortunately bundles an older version of Git Credential Manager for Windows, so you'll need to install in strict order to get this the correct way round.
My symptoms included the following:
No prompt for credentials when cloning my repo, just a couple of HTTP errors then a prompt for a password.
No prompt for credentials even though I had removed the PAT tokens and emptied Windows Credential Manager.
Errors thrown at the Git level (I tend to live in VS Code or Visual Studio).
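Before reinstalling, a couple of quick checks can confirm what you've actually got (assuming Git is on your PATH; the expected values are mine, not from any official troubleshooting guide):

```shell
# Show which credential helper Git is configured to use;
# "manager" indicates Git Credential Manager for Windows.
git config --global credential.helper || echo "no global credential helper set"

# Confirm the installed Git version; the dev.azure.com fixes need
# Git Credential Manager 1.19 or later installed alongside it.
git --version
```

If the helper is missing or the versions are out of step, install Git for Windows first and then the standalone Git Credential Manager for Windows 1.19 over the top.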
One of the (many) great aspects of my current role, working in the innovations area of a UK bank, is a relentless introduction to new features in Microsoft Azure. At my stage with Azure, in practice and in exams, it is usually a new feature or behaviour that has dropped as part of a second generation (e.g. Storage vs Data Lake), an evolution of features, or more subtly a change to the defaults of a combination (e.g. Automation and the Desired State Configuration extension). Then there are the "never heard of it" moments, when a term gets mentioned and I rattle straight off to a search engine.
One of these, a few months back, was Azure CycleCloud. One of our projects involved input from Microsoft, and their HPC specialist proposed it as a key component of the platform being evaluated. In our case it is acting as an orchestrator/scheduler, keeping tabs on a handful of low-priority virtual machine scale sets.
I've not had any direct exposure to HPC beyond the awareness needed for Microsoft architectural exams I've done in the past, for on-premises Windows and latterly the Azure cloud. The good news is that, within those parameters, the news is good: Azure CycleCloud appears straightforward, and being predominantly IaaS-based it is fairly easy to secure within our patterns. My thoughts so far are:
The web admin interface is fairly sensitive to environment – I’ve lost about a day to Internet Explorer (doesn’t work) and the reverse proxy on our firewall appliances mangling page scripts.
The manual install is straightforward and reliable in my limited experience – we have a VNet model that it sits in quite nicely, and the documentation is good on required ports and cluster communications.
Azure CycleCloud, being HPC and batch and so on, comes from open source land, so there's lots of command line and Linux – quite ironic that my career includes so many loops (my first job, at an accountants in the 1980s, included being the guy who wrote SQL reports using vi on a Unix practice management system).
Following on from the previous point, Azure CycleCloud integrates with Active Directory and also has its own RBAC model – very important to understand if you are trying to secure it.
I have a few concerns about the quickstart deployment, mainly the public IP address bound to a server, but that probably reflects our use cases and my background. (Googling "cyclecloud initial setup" reinforces this concern, as a number of servers sitting at the initial setup screen pop up.)
The cloud account relies on a service principal with fairly broad permissions, so it's important to keep on top of that, bearing in mind the last two points.
About 50% of the time I get the name wrong and call it Azure CloudCycle. This hit rate is slowly improving.
Where do I start? Migrating this blog over the weekend has led to a bit of a review and the realisation that a lot of the blog posts relate to my journey preparing for and sitting (and generally passing!) Microsoft Certified Professional (MCP) exams.
The last exam-related post on this blog is Passed 70-631 WSS Configuring Today, which was posted just under 10 years ago – yikes. I'm delighted to see that posts in the meantime related to motorcycling and Off Road Skills, so that would indicate some wider interests other than work.
In the spirit of the web I thought I would share a little more detail on how I migrated the content from my existing platform to the new one. This blog started in 2004 on a platform called SubText, and with my dev skills fading into the background and the original platform no longer being maintained, I was looking for something easier to run with (and a more current platform).
It has been a long time coming, but this is the second (more visible) post in the process of moving my blog from a hosted .NET site running SubText to something a bit more modern (yeah, three years behind the curve again – just look when I created this).
I'm going to do some checks of content and so on, but the plan is to be a bit more active on a platform that has a future.
The unfortunate side effect is that the comments haven't come across – the API in the very old version of SubText that I was using didn't expose comments. I've got a backup of the full original database, but I'm not optimistic that posterity will bring back the comments (all over 10 years old now).
I need to get the domain switched over – what fun!