How Microsoft does automated testing: an interview with Klaus Hemstitch

Microsoft HQ

I recently had a discussion with Klaus Hemstitch, a Senior Software Engineer at Microsoft.

I was curious to find out how they tackle the difficult mission of building scalable and useful automated tests.

He has been working there for the last 7 years in the Office 365 team.

Each day his team makes sure that all the web components are working as expected in all major browsers.

I started working there in 2013; before that, I worked for 3 years at Adobe.

Our team is responsible for the functional automated tests for most of the web components from Office 365.

The biggest challenge is definitely cross-browser testing. Making sure that everything looks perfect and works as expected in Chrome, Firefox, Safari, Opera, Internet Explorer, Edge and mobile browsers.

The official date when we’ll stop supporting Internet Explorer is August 17th, 2021.

But that doesn’t mean that we’ll stop testing on it, because we’re the first ones who should know when something no longer works.

Many enterprises are still using it and they’ll continue to use it for years.

A lot of legacy systems depend on it and there are cases where employees are not even allowed to use other browsers.

Using headless browsers is definitely a bad practice, since they tend to behave slightly differently in certain scenarios compared to regular browsers.

We always use real browsers on Windows and Mac machines and mobile devices.

If we get a report about a defect in production, it would be embarrassing to say “Hold on, let me check that manually, because our scripts are using the headless version of Chrome.”
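To make that difference concrete, here is a minimal sketch (mine, not from the interview) using Selenium in Python: headless Chrome even identifies itself differently, which some sites use to serve different content.

```python
# Minimal sketch: headless Chrome reports a different user agent
# ("HeadlessChrome" instead of "Chrome"), one of several documented
# behavioral differences from a regular browser.
from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")
driver = webdriver.Chrome(options=options)
driver.get("https://example.com")
print(driver.execute_script("return navigator.userAgent"))
driver.quit()
```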

Neither. We’re using a different solution.

If you had asked me 5 years ago, I would have said Selenium, but things have evolved.

Building an internal test framework with Selenium means reinventing the wheel and that leads to a terrible ROI.

It’s not a secret, we’re using Endtest.

We did a thorough analysis 1 year ago, which involved criteria such as ease of use, flexibility, collaboration, cross-browser capabilities, ROI, reliability, etc.

After doing the POCs and crunching the numbers, it was pretty clear what we should use.

It’s been great so far. It seems to be the only solution out there that handles cross-browser testing natively.

You build the test and then you can run it on any browser, without adding the extra tweaks like you would in Selenium.
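For readers who haven’t felt this pain, here is a hedged sketch of the kind of per-browser branching a typical Selenium suite carries (the structure is illustrative, not Microsoft’s code).

```python
# Illustrative only: each browser needs its own driver, and real suites
# layer browser-specific options, waits and workarounds on top of this.
from selenium import webdriver

def make_driver(browser: str):
    if browser == "chrome":
        return webdriver.Chrome()
    if browser == "firefox":
        return webdriver.Firefox()
    if browser == "edge":
        return webdriver.Edge()
    if browser == "safari":
        return webdriver.Safari()  # only runs on macOS
    raise ValueError(f"unsupported browser: {browser}")

for name in ("chrome", "firefox", "edge"):
    driver = make_driver(name)
    driver.get("https://example.com")
    driver.quit()
```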

I also like the fact that we can use it to test emails, text messages, PDFs, and API requests.

Not really.

I think that might be a misconception dating back to the old test recorders from the early 2000s.

When it comes to automated testing, you need things such as variables, if statements, loops and re-usable components.

Endtest has all of that; it’s as flexible as any scripting language.
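To be clear about what those constructs buy you, here is the same idea expressed in plain Python (this is not Endtest syntax, just an illustration; the URL and selectors are hypothetical).

```python
# Not Endtest syntax -- an illustration of the four constructs in a test:
# a variable, a re-usable component, a loop, and an if statement.
BASE_URL = "https://staging.example.com"  # variable (hypothetical URL)

def login(driver, user, password):  # re-usable component
    driver.get(f"{BASE_URL}/login")
    driver.find_element("id", "username").send_keys(user)
    driver.find_element("id", "password").send_keys(password)
    driver.find_element("id", "submit").click()

def test_inbox_pages(driver):
    login(driver, "qa@example.com", "secret")
    for page in range(1, 4):  # loop
        driver.get(f"{BASE_URL}/inbox?page={page}")
        if not driver.find_elements("css selector", ".message"):  # if statement
            break  # stop when a page has no messages
```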

When you pick a language or a tool, flexibility is important, but it’s not the most important thing.

If it were the most important thing, we would still be writing machine code.

I’m only including the functional tests in this answer.

It depends on several factors, such as how many commits we have.

At least a few thousand every day.

I get why you’re asking me this, because Playwright is developed by Microsoft.

There are several reasons.

Playwright is a library: it gives you the bricks, but you still have to build the house. All of that “building” translates into time and resources.
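A hedged illustration of the “bricks”: the snippet below (my sketch, not Microsoft’s code) is roughly where Playwright’s Python API gets you, while the runner, retries, reporting, scheduling and dashboards around it are the “house” you still have to build.

```python
# The "bricks": driving three browser engines with Playwright's API.
# Everything around this -- runner, retries, reporting -- is on you.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    for browser_type in (p.chromium, p.firefox, p.webkit):
        browser = browser_type.launch()
        page = browser.new_page()
        page.goto("https://example.com")
        assert "Example" in page.title()
        browser.close()
```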

We believe that developers should spend their time building the product that their company sells, not building an internal tool.

Internal testing frameworks are terrible when it comes to ROI.

ROI (Return on Investment) is important when it comes to anything.

A few years ago, if a team needed automated tests, a small number of people would decide what tool or library to use.

They wouldn’t calculate the implementation cost or anything like that, they would just go “Oh, this looks cool, let’s use that”.

That was an awful trend, it gave birth to a lot of Doctor Frankenstein Automation Engineers, building over-complicated and unreliable internal testing frameworks that were always in an “almost ready” state and that wouldn’t deliver much value.

Some of those engineers would become very defensive when it came to their work; they would twist any logic just to prevent those projects from being thrown away.

Things have changed nowadays: we’re seeing more people involved in these decisions, and we’re seeing best practices and real Project Management applied to these automation endeavors.

If this concept seems difficult to grasp, think of the following example:

Which option makes more sense?

Using a video conferencing tool like Zoom, or building an internal video conferencing tool from scratch with WebRTC?

WebRTC is open source and free, but building that internal video conferencing tool will take months of work and that translates into huge expenses for your employer.

Like I said earlier, reinventing the wheel leads to a terrible ROI.

For us, it was always important, but I feel like it’s not taken seriously by a large number of companies.

I think this will change in the near future; I would expect Accessibility legislation similar to GDPR to be widely adopted.

Developers need to understand what it means. Adding title attributes to your elements will make your website compatible with screen readers, but you’re ruining your Accessibility score by not testing in all major browsers.

People with visual impairment will be able to use your site, but people using Firefox, Safari or Edge will be left out because you’re only testing on Chrome.

This is the exact definition:

Accessibility is the practice of making your websites usable by as many people as possible.

Fortunately, we have the resources to avoid this trick.

We test everything, all the time.

But testing around the change is an acceptable trick for teams that do not have the resources for full regressions all the time.

It depends on the trend.

I’ve seen awful trends fueled by Marketing budgets, disguising limitations into good practices.

The stuff of nightmares.

One example off the top of my head is a company that built a product for automated testing with such a bad architecture that it can’t even handle multiple browser tabs.

This is mostly because it relies too much on JavaScript, and we all know that browsers don’t like it when JavaScript tries to reach into another browser tab.

They’re disguising this limitation by saying that you don’t have to test if a link actually opens a page in a new browser tab, because that would mean that you’re actually testing the browser and not the website.

They’re saying that you can just check for the target="_blank" attribute.

By applying the same twisted logic, you could say that you don’t have to perform any clicks, you can just check for onclick events.
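For contrast, testing the behavior itself is not hard. Here is a hedged sketch using Playwright’s Python API (the URL and selector are hypothetical) that clicks the link and asserts a new tab really opens:

```python
# Sketch: verify the click actually opens a new tab, instead of only
# checking for the target="_blank" attribute.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context()
    page = context.new_page()
    page.goto("https://example.com")  # hypothetical page with a _blank link

    with context.expect_page() as new_page_info:
        page.click("a#external-link")  # hypothetical selector
    new_tab = new_page_info.value
    new_tab.wait_for_load_state()
    assert new_tab.url.startswith("https://")
    browser.close()
```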

I’ve also seen another tool that promotes the idea of doing cross-browser testing by testing your website in Chrome, fetching the DOM at each step, and then pasting those DOM dumps into other browsers.

I can’t understand why anyone would come up with such a hideous idea.

I hope I’ll see a trend about Accessibility in the near future.

Cross-browser testing is important, not everyone is using Chrome on a MacBook.

A browser is not just a JavaScript interpreter.

A headless browser can behave differently than a regular browser.

Chrome on Linux can behave differently than Chrome on Windows.

Test the entire workflow. If your website is sending an email when you perform a certain action, verify that the email was actually sent and that it looks as expected.

If that email has a button, make sure you click on it and check where it takes you.
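As a rough illustration of that email step, here is a sketch using Python’s standard imaplib (the mailbox, credentials and subject line are hypothetical): fetch the triggered email and pull out the button’s link so a browser step can follow it.

```python
# Sketch: fetch the email triggered by the action under test and extract
# the button's link, so a browser step can open it and verify the landing page.
import email
import imaplib
import re

with imaplib.IMAP4_SSL("imap.example.com") as imap:  # hypothetical mailbox
    imap.login("qa-inbox@example.com", "app-password")
    imap.select("INBOX")
    _, data = imap.search(None, '(SUBJECT "Confirm your account")')
    latest_id = data[0].split()[-1]
    _, msg_data = imap.fetch(latest_id, "(RFC822)")
    msg = email.message_from_bytes(msg_data[0][1])

    # Pull the button's href out of the HTML part.
    for part in msg.walk():
        if part.get_content_type() == "text/html":
            html = part.get_payload(decode=True).decode()
            link = re.search(r'href="([^"]+)"', html).group(1)
            print("Button leads to:", link)
```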

Stay up-to-date with the latest tools, because reinventing the wheel will make you look like a fool.
