Name of the test: __, Responsibilities: __; Date of the test: __, Outcome of the test: __; etc.
The last step in the agile test plan is the QA team’s functions and tasks. In this section, the functions and tasks of developers and testers are listed. In agile testing, everyone must communicate and collaborate to make sure quality is preserved throughout the software development process.
DTAP encourages the creation of queues, whereas Agile software development encourages removal of queues.
Mickael Ruau's insight:
Imagine a team that can reliably deploy a single feature to a live environment without disrupting it. They develop new features on a development-environment. The team integrates work through version-control (e.g. Git). Features are developed on a ‘develop’-branch in their version-control system. When a feature is done, it is committed to a ‘master’-branch in their version-control system. A build server picks up the commit, builds the entire codebase from scratch and runs all the automated tests it can find. When there are no issues, the build is packaged for deployment. A deployment server picks up the package, connects to the webserver/webfarm, creates a snapshot for rapid rollback and runs the deployment. The webfarm is updated one server at a time. If a problem occurs, the entire deployment is aborted and rolled back. The deployment takes place in such a manner that active users don’t experience disruptions. After a deployment, a number of automated smoke tests are run to verify that critical components are still functioning and performing. Various telemetry sensors monitor the application throughout the deployment to notify the team as soon as something breaks down.
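The post-deployment smoke tests described above can be sketched in a few lines. This is a minimal illustration, not the team's actual tooling: the endpoint names and the idea of passing in a `fetch` callable (so the same check runs against a real webfarm or a stub) are assumptions for the example.

```python
# Hypothetical post-deployment smoke test; the endpoint paths are
# illustrative assumptions, not a real application's checks.
CRITICAL_ENDPOINTS = ["/health", "/login", "/api/orders"]

def run_smoke_tests(fetch, endpoints=CRITICAL_ENDPOINTS):
    """Return the list of endpoints that failed their smoke check.

    `fetch` is any callable mapping an endpoint path to an HTTP status
    code, so the same logic works against a live server or a test stub.
    """
    failures = []
    for endpoint in endpoints:
        try:
            status = fetch(endpoint)
        except Exception:
            # An unreachable endpoint counts as a failure.
            failures.append(endpoint)
            continue
        if status != 200:
            failures.append(endpoint)
    return failures

def deployment_ok(fetch):
    # The deployment is kept only if every critical component responds.
    return not run_smoke_tests(fetch)
```

In the scenario above, a non-empty failure list would trigger the rollback to the snapshot taken before deployment.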
It became very apparent the model wasn’t working; we were doing it wrong. We were not the first team to recognize the problem. There were services before us, like Bing, that saw this. And we started observing the best practices of some of the companies born in the cloud. How did our approach to quality change in the cloud cadence? We did three big things.
We redefined quality ownership, we fixed the accountability.
We understood that in order to ship frequently, the master branch must always remain as healthy as the release branch. We defined a core principle – master is always shippable. The principle touches everything – source code management, code practices, builds, etc. From a testing perspective, we pushed two things: shift-left testing (i.e. greater emphasis on unit testing) and eliminating flaky tests.
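Eliminating flaky tests starts with detecting them. A minimal sketch, assuming nothing about Microsoft's actual tooling: a test that both passes and fails across identical reruns is flaky, which a simple rerun loop can surface.

```python
def is_flaky(test_fn, runs=20):
    """Run a zero-argument test repeatedly; a test that both passes and
    fails across identical runs is flaky. Illustrative only - real
    pipelines also vary ordering, timing, and environment."""
    results = set()
    for _ in range(runs):
        try:
            test_fn()
            results.add(True)
        except AssertionError:
            results.add(False)
    # Flaky means we observed both outcomes without changing anything.
    return len(results) == 2
```

A deterministic test (always passing or always failing) is not flaky under this definition; only nondeterministic results are flagged.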
We also understood there is no place like production. This is the shift-right part of the strategy. It’s a set of practices about both safeguarding production and ensuring quality in production.
In other words, we pushed testing left, we pushed testing right, and got rid of most of the testing in the middle. This is a departure from the past, when most testing happened in the middle: integration-style testing in the lab. The rest of the document describes #1 in a little bit more detail.
Mickael Ruau's insight:
The Org Change for Quality Ownership
We did ‘combined engineering’ – a term used at Microsoft to indicate merging of responsibilities for dev and test in a single engineering role. It is not just an organization change where you bring the Dev and Test teams together. It is an actual discipline merge, with single Engineer role that has qualifications and responsibilities of the SDE and SDET disciplines of the past.
Everyone has a new role, everyone needs to learn new skills. This is a very important point. When we talk about combined engineering, a common question we get is how we trained the former testers. We had to train both ways. A former developer had to learn how to be a good tester and a former tester had to pick up good design skills. Managers had to learn how to manage end to end feature delivery.
In this model, there is no handoff to another person or team for testing. Each engineer owns E2E quality of the feature they build – from unit testing, to integration testing, to performance testing, to deployment, to monitoring live site and so on. Partnership with other engineers is still valued, even more so. There is now greater emphasis on peer reviews, design reviews, code reviews, test reviews etc. But the accountability for delivering a high quality feature is not diluted across multiple disciplines.
This was a big cultural shift across the company. This change happened first in one org, but then over a few years, every team across Microsoft moved to this model. There are some variations to this model but at this point there are no separate dev and test teams at Microsoft. They are just engineering teams with the combined engineer roles.
Alan, Ken Johnston, and Bj Rollison recently published How We Test Software at Microsoft (448 pages, ISBN: 9780735624252) in Microsoft Press's Best Practices series for developers. The authors have also created a website devoted to the book: HWTSAM (information and discussion on the MS Press release "How We Test Software at Microsoft"), where you can read reviews, review the book’s table of contents, and see pictures of the book (in the wild indeed!). And Alan’s guest post contains a lengthy excerpt from the book. Enjoy!
This article was written for and published in the CFTL book "Les tests en agile". The layout is therefore noticeably better in the book; the content, however, remains the same.
Mickael Ruau's insight:
Here is a diagram illustrating the difference in scale between releases under the V-model and under agile methods:
Agile testing covers two specific business perspectives: on the one hand, it offers the ability to critique the product, thereby minimizing the impact of defects delivered to the user. On the other, it supports iterative development by providing quick feedback within a continuous integration process. Neither of these factors can come into play if the system does not allow for simple system-, component-, and unit-level testing. This means that agile programs that sustain testability through every design decision enable the enterprise to achieve a shorter runway for business and architectural epics. DFT helps reduce the impact of large system scope and affords agile teams the luxury of working with something more manageable. That is why the role of the System Architect is so important in agile at scale, but it also reflects a different motivation: instead of defining a Big Design Up-front (BDUF), the agile architect helps establish and sustain an effective product development flow by ensuring that the assets being developed are of high quality and needn’t be revisited. This reduces the cost of delay in development because in a system that is designed for testability, all jobs require less time.
System Architect Role and DFT
With respect to designing for testability, the system architect can play a primary role in DFT:
Choice of technologies In addition to their main purpose, software libraries, frameworks, repositories and services should also support testability. For example, technologies that support the inversion of control may be useful, not only in terms of designing a flexible system, but also in relation to testability.
Implementation Decisions For instance, having too much logic on the DB side (a common problem in many products) makes testing virtually impossible. So does unreasonable use of asynchronous message queues within the system.
Design conventions Design patterns like façade, gateway, or observer foster testability. And yet, such a common thing as using a web service proxy class may not. A good convention is to use an “abstract” interface that interacts with the proxy, thereby allowing the substitution of one with a mock object when necessary.
Approach to creating fake objects and mocks What tools and approaches should be applied to create stubs, mocks and “spies” in the code, which would support unit- and component testing?
Logging and dumps In large systems, it often happens that some system-level tests fail while all unit tests pass. Thus, it may be difficult to diagnose the root cause of the problem without a good logging approach and the ability to retrieve thorough memory (or protocol) dumps.
Flexible configuration This allows simple deployment to the test environment and easy linking of test data sources and external mock objects by simply updating the configuration files.
Table 1. Aspects of the system architect’s role in fostering system testability.
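The “abstract interface over a proxy” convention from the table can be sketched as follows. This is an illustration under assumed names (`PaymentGateway`, `CheckoutService` are hypothetical, not from the article): the gateway is injected (inversion of control), so a test can substitute a mock for the real web service proxy.

```python
from abc import ABC, abstractmethod

class PaymentGateway(ABC):
    """The 'abstract' interface that shields callers from the proxy."""
    @abstractmethod
    def charge(self, amount_cents: int) -> bool: ...

class WebServiceProxyGateway(PaymentGateway):
    """Production implementation wrapping the generated proxy class."""
    def charge(self, amount_cents):
        raise NotImplementedError("would call the real web service here")

class MockGateway(PaymentGateway):
    """Test double that records calls instead of hitting the network."""
    def __init__(self, succeed=True):
        self.succeed = succeed
        self.calls = []
    def charge(self, amount_cents):
        self.calls.append(amount_cents)
        return self.succeed

class CheckoutService:
    # The gateway is injected rather than constructed internally,
    # which is what makes this class unit-testable.
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway
    def pay(self, amount_cents):
        return "paid" if self.gateway.charge(amount_cents) else "declined"
```

A unit test builds `CheckoutService(MockGateway())` and inspects `calls`, never touching the real service; production code passes `WebServiceProxyGateway` instead.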
Microsoft CEO Satya Nadella is preaching a more nimble approach to building software as part of the company's transformation.
Mickael Ruau's insight:
Following a July 10 memo in which he promised to “develop leaner business processes,” Mr. Nadella told Bloomberg Thursday that it makes more sense to have developers test and fix bugs than to rely on a separate team of testers when building cloud software. Such an approach, a departure from the company’s traditional practice of dividing engineering teams into program managers, developers, and testers, would make Microsoft more efficient, enabling it to cut costs while building software faster, experts say. Read more ...
Oracles tell us if a test is returning the right answer. Some tests need simple oracles, but problems with many possible answers need combinatorial oracles.
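Sorting is a classic case where a single expected answer is too rigid but a property-style oracle works: any output is correct as long as it is ordered and is a permutation of the input. A minimal sketch (the function name is mine, not from the source):

```python
from collections import Counter

def is_valid_sort(inputs, outputs):
    """Property-style oracle for any sorting implementation:
    the output must be in nondecreasing order AND contain exactly
    the same items (with multiplicity) as the input."""
    in_order = all(a <= b for a, b in zip(outputs, outputs[1:]))
    is_permutation = Counter(inputs) == Counter(outputs)
    return in_order and is_permutation
```

The oracle catches both kinds of wrong answers: outputs that are ordered but drop or duplicate items, and outputs that keep the items but leave them out of order.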
I’ve been meaning to post this for ages. So while I’m polishing the rough edges on my Part 2 of 2 post, I thought I’d take this opportunity to finally make good on that promise. Here it is: the Quality Tree Software, Inc. Test Heuristics Cheat Sheet, formerly only available by taking one of our testing classes.
Scrum methodology comes as a solution for executing such a complicated task. It helps the development team to focus on all aspects of the product like quality, performance, usability and so on.
Testers perform the following activities during the various stages of Scrum:
Sprint Planning
In sprint planning, a tester should pick a user-story from the product backlog that should be tested.
As a tester, he/she should decide how many hours (effort estimation) it should take to finish testing each of the selected user stories.
As a tester, he/she must know what sprint goals are.
As a tester, he/she contributes to the prioritization process
Sprint
Support developers in unit testing
Test the user story when completed. Test execution is performed in a lab where both tester and developer work hand in hand. Defects are logged in a defect management tool and tracked on a daily basis. Defects can be discussed and analyzed during the scrum meeting. Defects are retested as soon as they are resolved and deployed for testing
As a tester, he/she attends all daily standup meetings and speaks up
As a tester, he/she can carry over any backlog item that cannot be completed in the current sprint to the next sprint
The tester is responsible for developing automation scripts. He/she schedules automation testing with the Continuous Integration (CI) system. Automation gains importance due to short delivery timelines. Test automation can be accomplished using various open source or paid tools available in the market. This proves effective in ensuring that everything that needs to be tested is covered. Sufficient test coverage can be achieved with close communication with the team.
Review CI automation results and send Reports to the stakeholders
Executing non-functional testing for approved user stories
Coordinate with customer and product owner to define acceptance criteria for Acceptance Tests
At the end of the sprint, the tester also performs acceptance testing (UAT) in some cases and confirms testing completeness for the current sprint
Sprint Retrospective
As a tester, he/she figures out what went wrong and what went right in the current sprint
As a tester, he/she identifies lessons learned and best practices
Alice is dreaming of tests to add to her application when she spots the White Rabbit, who is worried about quality. Chasing after him, she finds herself propelled into a world strangely resembling her code, and begins to conjure up numerous unit tests. Yet the White Rabbit is still unsatisfied; the tests rebel, become uncontrollable, and no longer verify what she wants. How will Alice manage to regain control of her tests and make them work properly?
Through Alice's adventures, I will present the common testing pitfalls that often discourage beginners, as well as good practices and tools for achieving functional and effective tests.
As traditional knowledge sharing is no longer an effective way to deliver great software, the presenter has modified the mob programming concept to mob testing to improve the way teams communicate.
This innovative approach to software testing allows the whole software development team to share every piece of information early on.
Mob testing tightens loopholes in the traditional approach and tackles the painful headaches of environment setup and config issues faced by a new arrival to the team.
Think of mob testing as an evaluation process to build trust and understanding.
Know when, how, and why you should paper prototype. Tips, templates, and resources included.
Mickael Ruau's insight:
Testing & Presenting Paper Prototypes
When it comes time to show your paper prototypes to other people — whether stakeholders or testers — things get tricky. Because so much is dependent on the user’s imagination, you need to set the right context.
Designate one person to play the “computer” — A common mistake is to have the presenter also control the prototype screens, but the role of the “computer” requires full attention. To best simulate an automated system, one person’s only job should be switching the screens according to the user’s actions.
Rehearse — The role of the computer is not as easy as you might think. Rehearse beforehand to iron out the kinks and prepare the “computer” for a live performance.
Follow standard usability test best practices — Tips like using a minimum of 5 users and recording the tests still apply. For advice on usability testing in general, read the free Guide to Usability Testing.
Guide the feedback — When showing someone a paper prototype, prime them by explaining the context of the design. What did you design? What have you left out for later? What elements of the design are you specifically looking for input on? Generally, you’ll want to explain that structure and flow are where the person should focus.
Testing becomes part of the development process. It is not something you do at the end with a separate team. Both Ken Schwaber and I were consultants on that project. The next problem is Windows. At Agile 2013 last summer, Microsoft reported on a companywide initiative to get agile: 85% of every development dollar was spent on fixing bugs in the non-agile groups of over 20,000 developers. Fixing that requires a major reorganization at Microsoft.
In this Webinar, Michael Bolton takes a hard look at the testing mission and how we go about it. As alternatives to test cases, he’ll offer ways to think about…