Well Architected in the time of COVID

Tom Henry, Account Manager at Ubertas Consulting, considers the impact of COVID and our ongoing mission to assist customers through the AWS Well-Architected Framework.
The Well-Architected Framework Programme has provided great support to AWS customers since its inception and has proved a valuable method of helping them optimise and run their workloads in the Cloud. The framework focuses on all aspects of an AWS deployment to ensure scalable, secure, performant and cost-efficient operation in the cloud.

In the 12 months prior to lockdown, Ubertas Consulting conducted an average of one Well-Architected Review a week, each one providing an opportunity to assist a customer on their AWS journey. We pride ourselves not only on our technical expertise but also on our approach to building trust through the process, often ensuring that our consultants take the time to be on-site with each and every customer.

Lockdown, and the requirement to work from home, posed the problem of physical distance. During the COVID-19 pandemic, like many other companies around the world, we have had to adjust how we deal with the ‘normal’ day-to-day goings-on of the business, and the same applies to how we approach the Well-Architected Framework process, in particular the review and remediation stages, which require close cooperation with the customer.

We’ve moved on from on-site workload reviews and meeting our clients face-to-face to building a virtual rapport: carrying out entirely remote reviews and remediation planning, and using Slack channels to keep communications flowing throughout the remediation process.

A few of the changes we have made are as follows:
· Running the review remotely using Amazon Chime
· Remote/read-only access to the workload’s environment to assist with preparation
· A Slack channel for communication between the prep/review/remediation stages

A view from our Technical Director, Andy Hammond:

“When it comes to carrying out a Well-Architected review for a customer, stakeholders can be reluctant to give too much away. It’s our challenge to reassure them that we’re here to help. In a more restrictive working environment, it can be a real challenge to remain effective at the kind of clear communication required to build trust. During lockdown we’ve adopted new practices in order to keep the review session engaging and impactful.

Two important guidelines we have across our teams:

“Take the virtual wall down” We always have our camera on, as we feel it makes it easier to communicate effectively. When we’re face-to-face we can more easily convey a demeanour that’s relaxed and welcoming.

“Take it steady” Rather than peppering the customer with 10 questions in the first 30 minutes, we try to ease more gently into the review. The framework consists of 46 questions across 5 pillars, and it’s easy to forget that although there are areas that must be covered within a given timeframe, the process can be intense for our customers. Many customers will never have been asked questions in this way before, and we must try to make it as natural as possible.”

And a customer’s view:

“Having your first ever Well-Architected review is a slightly daunting challenge, but Ubertas Consulting were very accommodating. I wanted to be in control of my account at all times, and Ubertas were happy to let me drive the remediation actions with their guidance, the consultant never showing any sign of frustration. The consultant was also keen to ensure that I understood the actions that had been taken and could replicate them in future if required. A positive experience all round, and I would be keen to do this again. The biggest problem is having the discipline not to go for 2-3 hrs straight in an online meeting.”

– William Ngufor, Cloud Architect at OmPrompt

The most important message we convey to all our customers is that Well-Architected is in place to help, and is one method customers can use to identify areas to work on in a fast-moving technology space; it isn’t an audit or a stamp of approval.

See more about our Well-Architected Framework Programme

– Tom Henry, Account Manager at Ubertas Consulting.

DevOps Blog: What is DevOps?

What is DevOps?
DevOps culture is characterised by strengthened collaboration, shared responsibility, improving quality through valued feedback, and increasing automation.

DevOps is a blend of development and operations teams: they follow a set of processes and use tools that help them deliver faster and more stable applications.

The role of a Developer is to create applications that are cutting edge and easy to use. The Operations team is tasked with keeping the application as stable as possible.

What goals does DevOps hope to achieve?
The goal is to vastly improve the workflow to satisfy the following:

  • Increase deployment frequency
  • Achieve quicker release times
  • Lower the failure rate/bugs of new releases
  • Shorten the downtime between fixes

Before DevOps, the process used was called Waterfall, and it was very different to how things are done now. Fast forward to today and the process is known as CICD.

What is Waterfall?
Waterfall was a process where applications were fully developed before being released. Once an application was released, there was no straightforward way of fixing bugs!

What is CICD?
CICD stands for continuous integration and continuous delivery (or deployment). This allows applications to be released incrementally, before they are fully developed. Developers upload their code to a CI server, where the code is checked to see if it is compatible with the current codebase and to ensure there will not be any clashes.
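To illustrate what that CI check can look like in practice, here is a minimal sketch of a build definition (shown as a hypothetical AWS CodeBuild buildspec; the commands and project layout are assumptions, and your CI server will differ):

```yaml
# buildspec.yml - a hypothetical CI definition run on every code upload.
version: 0.2
phases:
  install:
    commands:
      - npm ci          # install the project's pinned dependencies
  build:
    commands:
      - npm run build   # verify the new code still compiles with the current code
      - npm test        # run the test suite to catch clashes before release
```

If any phase fails, the CI server rejects the change, so clashes are caught before the code moves towards release.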

Why is DevOps important?

Building is when the developers build the desired application for the target audience.

Testing is when the application goes through a rigorous testing process to check it is suitable to release to end users.

The application is released after it has gone through the testing phase and is deemed suitable for end users.

After release, you should monitor how your application is performing; when you receive feedback from the target audience, list out what the issues are and discuss a plan for fixing them.

With all the issues gathered from feedback, you can now address them, recode and test again before releasing. The diagram below shows the CICD workflow.

DevOps Blog: Infrastructure as code

One of the first things about Cloud computing that really fired up my imagination was the concept of infrastructure as code. The phrase instantly conjured up the merging of two worlds in a very exciting way. As I looked into it more, I realised that this concept was clearly one of the cornerstones of the public Cloud (and of DevOps as well, but that is probably best left for another blog post).

This all sounds wonderful, but what does it actually mean? To understand this it is easiest to consider the two parts of the term separately. In a traditional IT environment the infrastructure consists of the servers and storage that run the applications a business uses on a daily basis, as well as the networking components that plumb everything together. The code is the software that actually makes up these applications, written in a programming language. The infrastructure would be physical, but the code would be stored digitally and, importantly, could be backed up, copied, and maintained in different versions.

Virtual Infrastructure 

In the world of the cloud however the hardware infrastructure components are (from the user’s point of view anyway) virtual. If you need a new server you just log into your account and with a few clicks of your mouse you can have it up and running. The concept of infrastructure as code takes this a step further however and allows the virtual, cloud based infrastructure to be described textually in a way very reminiscent of computer code. This textual description or template can then be used to request the infrastructure.

This has many remarkable and powerful implications. It means that your infrastructure is now reproducible at the click of a button, which is great for producing test and development environments that are identical to a given production environment. It removes a large degree of human error from the process. Your infrastructure is also documented in a single location, and the templates can be stored, backed up and version controlled in the same way that computer code can be.

How to Achieve It 

To give a concrete example, Amazon Web Services (AWS) <https://aws.amazon.com/> has a service called CloudFormation <https://aws.amazon.com/cloudformation/> which allows you to describe your infrastructure using a language called JSON <https://en.wikipedia.org/wiki/JSON> (JavaScript Object Notation). You then upload this template and the AWS CloudFormation service works out the dependencies between the various infrastructure components specified and requests that they are created. In AWS terms, this set of infrastructure is referred to as a stack. If you need to modify the stack, you can just make the changes in the template and CloudFormation will work out what has changed and apply it for you.
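As a minimal sketch of what such a template can look like (the resource name here is purely illustrative), a JSON document simply declares the components you want and CloudFormation does the rest:

```json
{
  "AWSTemplateFormatVersion": "2010-09-09",
  "Description": "Minimal illustrative stack: a single S3 bucket.",
  "Resources": {
    "ExampleBucket": {
      "Type": "AWS::S3::Bucket"
    }
  }
}
```

Uploading this template creates a stack containing one bucket; adding further entries to the Resources block and updating the stack is all it takes to grow the infrastructure.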

Infrastructure as code really is a concept that sounds simple and innocuous at first, but the more you think about it, the more you realise how much potential it has to transform the way you work.

Cloud is about how you do computing, not where you do computing.

DevOps Blog: Cross Region Stack Management

Using StackSet For Cross Region Stack Management

AWS CloudFormation StackSets allows you to create, update, or delete CloudFormation stacks across multiple accounts and regions with a single operation.

This article will focus on how to deploy the same CloudFormation stack in multiple regions using AWS StackSets.


Create S3 buckets in two different regions using StackSets.


Note: Use your account ID when it asks for AdministratorAccountId

Note: Please be aware that this template grants Administrator access, so you might want to modify it to be more restrictive

  • Once the two roles are created we can begin working on StackSets
  • First we have to create and save the YAML file; we can call it s3.yaml
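A minimal sketch of what s3.yaml might contain is shown below (the resource and output names are illustrative; omitting the BucketName property lets CloudFormation generate a unique bucket name in each region, since S3 bucket names are globally unique):

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Description: Creates one S3 bucket in each region targeted by the StackSet.
Resources:
  StackSetBucket:
    Type: AWS::S3::Bucket
    # No BucketName property: CloudFormation generates a unique name,
    # avoiding clashes across regions (bucket names are global).
Outputs:
  BucketName:
    Value: !Ref StackSetBucket
```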


  • Go to the CloudFormation page and click on StackSets in the left tab
  • Select Create StackSet, choose to upload a template file, select s3.yaml and click Next
  • Put in the StackSet name and description and click next
  • Select Self-service permissions, select AWSCloudFormationStackSetAdministrationRole for the IAM admin role ARN, and set the IAM execution role name to AWSCloudFormationStackSetExecutionRole
  • Under Account numbers put in your account ID and under Specify regions put in the regions you’d like the StackSets to be run in and then submit
  • Once the StackSet is created, select Stack instances; the status should say OUTDATED but the status reason should say User Initiated, which means that the stack instance is being configured. After a couple of minutes the status should change to CURRENT, and you can go to the CloudFormation pages in the regions you specified to see that a new CloudFormation stack has been created in each of them.
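The same console steps can also be scripted. Below is a sketch using the AWS CLI with self-managed permissions (the StackSet name, account ID and regions are placeholders):

```shell
# Create the StackSet from the template, using the self-managed permission roles.
aws cloudformation create-stack-set \
  --stack-set-name s3-demo \
  --template-body file://s3.yaml \
  --administration-role-arn arn:aws:iam::111122223333:role/AWSCloudFormationStackSetAdministrationRole \
  --execution-role-name AWSCloudFormationStackSetExecutionRole

# Deploy stack instances of the StackSet into two regions of the same account.
aws cloudformation create-stack-instances \
  --stack-set-name s3-demo \
  --accounts 111122223333 \
  --regions eu-west-1 eu-west-2
```

These commands require valid AWS credentials in the target account, so they are shown here only as a sketch.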


Written by
Fayomi Fashanu: Senior AWS Solutions Architect at Ubertas Consulting

DevOps Blog: DMS: An Introduction (Part 1)

DMS: An Introduction (Part 1)

Anatomy and how it works

The Database Migration Service (DMS) is an AWS service that enables us to migrate vast amounts of data from our source databases, either as a one-time load or, via continuous replication, without ever incurring downtime.

Over the years, DMS has continuously evolved to support a wide range of engines, along with the capability to undertake migrations where the source and destination engines are different (heterogeneous migrations).

DMS supports the following source and target engines.

Sources:

  • Oracle
  • MySQL
  • Microsoft SQL Server
  • MariaDB
  • MongoDB
  • Db2 LUW
  • SAP
  • PostgreSQL
  • Amazon Aurora (MySQL & PostgreSQL)

Targets:

  • Oracle
  • MySQL
  • Microsoft SQL Server
  • Amazon Aurora
  • Amazon Redshift
  • Amazon S3
  • Amazon DynamoDB




Endpoints

Below are the key structural components of DMS. Naturally, we start off with our endpoints, which must be defined as either source or target when created.

Along with this we simply configure our authentication information and optional connection attributes.

We can modify connection attributes in order to override particular settings within the DMS agent’s session, depending on your source/target database.

By default, DMS loads tables in alphabetical order, which isn’t always desirable depending on your database structure.

At Ubertas Consulting, we regularly encounter relational database tables with foreign key constraints between them, and this can cause errors due to the way that the DMS agent loads the tables. Because of this, we often update the connection attributes within the target endpoint to include the following:
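The exact attribute depends on your target engine, but as an illustration, for a MySQL-compatible target the extra connection attribute below disables foreign key checks for the migration session, so tables can be loaded in any order:

```
initstmt=SET FOREIGN_KEY_CHECKS=0
```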



This ensures that we don’t receive false-positive errors during the full load.

On either side of our endpoints we have our source and target databases. Your source and destination databases don’t need to be hosted within AWS, but they must be supported by the DMS agent, which runs on the replication instance.

Replication Instances

Next up we have our replication instances, which are what we are actually charged for when using DMS; AWS don’t charge for tasks or endpoints. The only other costs associated with carrying out a migration project, outside of your replication instances, would come from the following services:

  • CloudWatch — keep an eye on excessive storage charges if you ever need to enable detailed debug logging within your tasks. This modification will result in SQL statements and other verbose information being sent to your CloudWatch log groups/streams.
  • Data Transfer — depending on the location of your source/target databases in relation to your replication instances, the ingress/egress of data can result in charges per GB.

Our replication instances are backed by AWS EC2 and much of the configuration is abstracted away from us.

The configuration options are limited to a subset of instance types (listed below), disk size (limited to the gp2 volume type), VPC/subnets and whether the instance is publicly accessible. We can also make our replication instance Multi-AZ, which means it is deployed in a highly available configuration so that, in the event of an outage, it can fail over and prevent your critical migrations from being disrupted.

DMS Replication Instance Types:

  • dms.t2.micro ~ dms.t2.large
  • dms.c4.large ~ dms.c4.4xlarge
  • dms.r4.large ~ dms.r4.8xlarge
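The replication instance configuration described above can be sketched with the AWS CLI as follows (the identifier, instance class and storage size are illustrative):

```shell
# Sketch: create a Multi-AZ replication instance with gp2 storage.
aws dms create-replication-instance \
  --replication-instance-identifier example-migration \
  --replication-instance-class dms.t2.medium \
  --allocated-storage 50 \
  --multi-az
```

As with any DMS operation, this requires valid AWS credentials and will incur replication instance charges while it runs.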

Migration Tasks

Finally we have DMS Migration Tasks, which run on our replication instances and represent what exactly it is we are migrating and how we’re doing it. AWS recommend that we break our migration into multiple Migration Tasks, and this is also evident in the service limits which are imposed upon us.

We highly recommend that you spend plenty of time analysing your source database and planning Migration Tasks before diving into your migration. For example, if you have particularly large tables with large-object columns then we recommend creating separate tasks for these. This will allow your other tasks to progress faster without being blocked.

It’s especially important to split out your migration into multiple tasks so that should something go wrong, you are able to respond to failures with more agility.

Service limits:

  • Replication Instances: 20
  • Migration Tasks: 200
  • Endpoints: 100

Console UI

AWS updated the design of the DMS console in March 2020; here are some screenshots (April 2020). The main difference is that the sections listed below are now better organised into tabs.


Within this section we can view the metadata of our Migration Task, and there’s a helpful link to our CloudWatch logs. Logging within your DMS Migration Tasks is optional, but very useful for debugging if you ever experience connection or permission issues.

We can also view our task settings as JSON, which is intentionally exposed in the console because this is how we update our Migration Tasks. As of April 2020, we are still significantly limited in the updates we can make within the console, and must instead use the AWS CLI and pass JSON while the Migration Task is not in a running state.
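As a sketch, with the task stopped, such an update might look like the following (the task ARN and file name are placeholders):

```shell
# Apply updated task settings from a local JSON file to a stopped Migration Task.
aws dms modify-replication-task \
  --replication-task-arn arn:aws:dms:eu-west-1:111122223333:task:EXAMPLE \
  --replication-task-settings file://task-settings.json
```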


Table statistics

The ability to view the status and progress of each table is, in our opinion, the most useful component within the Migration Task console view.

Below we can see two tables within the same schema, named “auditing” — both have carried out an initial full load, row validation has been completed, and ongoing changes are now being captured using the source database’s binary logs.


CloudWatch metrics

Within the CloudWatch metrics section, we are able to easily monitor a particular Migration Task’s metrics without having to go to CloudWatch directly.

However, we can still create custom dashboards within CloudWatch, for example if we would like to create aggregated views to summarise the overall progress of our database migration.

For a detailed explanation of the different CloudWatch metrics for DMS Migration Tasks, you can check out this AWS documentation page:



Mapping rules

This is an area that we alluded to in an earlier section of this article. Within DMS, we are able to use a strategy that involves breaking up our migration project into multiple Migration Tasks. We can update our table mappings either by using the console UI or via JSON.

As well as specifying particular schemas within the database to exclude/include, we can also apply filters to tables, for example if we have large tables and want to migrate our data in smaller chunks.
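As an illustrative sketch (the schema, table and column names here are assumptions), a table mapping that includes one schema and filters a large table down to a range of rows could look like this:

```json
{
  "rules": [
    {
      "rule-type": "selection",
      "rule-id": "1",
      "rule-name": "include-audit-log-range",
      "object-locator": {
        "schema-name": "auditing",
        "table-name": "audit_log"
      },
      "rule-action": "include",
      "filters": [
        {
          "filter-type": "source",
          "column-name": "id",
          "filter-conditions": [
            { "filter-operator": "ste", "value": "1000000" }
          ]
        }
      ]
    }
  ]
}
```

The ste operator means “smaller than or equal”; a second task could pick up the remaining rows with gte, splitting one large table across tasks.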

For more information on Mapping rules, you can visit the relevant AWS documentation page here:


Within this article, we explained the basic anatomy of DMS and how it works.

In part 2, we will dive deeper into how to optimise your Migration Tasks for speed and agility.

Written by
Andy Hammond: Tech Lead at Ubertas Consulting