I’ve been lucky, once again, to attend James Bach‘s course, this time Rapid Software Testing Management. As with the regular RST course, this one is also developed with Michael Bolton, and Michael has a description for it. Of course, the presenters are different, so the courses are not the same – they’re similar.
Overall, this course fits well with Agile, is full of common sense and is very influential. In fact, the idea of Scrum’s stand-ups was taken from Borland (according to Jim Coplien), and James said it was his project and his idea to run those. Similarly, at Borland testers worked closely with developers and practised things that XP and Scrum practitioners started promoting in the late 90s.
The class started with questions from the audience – we wrote down the topics we had questions about and started digging into them. As usual with RST, James asked tricky questions and questioned our answers. He told interesting and instructive stories from his time at Apple, Borland and, nowadays, Satisfice. In addition, he had things to share from eBay, where his brother Jon Bach is a QE director.
I noticed that those who hadn’t attended an RST course before, or weren’t active in the Context-Driven/RST community, fell into traps more easily. RSTers almost never answered directly and asked questions for additional information instead. The RST course makes you smarter. One must understand that to succeed with RST, it’s a must to be passionate about problem solving, to be an independent and open-minded thinker who asks tricky questions, and to have the courage to act.
Below are notes I made for myself. This is a mix of my own learnings and (for the most part) course materials.
In James’ opinion there is no such thing as a quality metric. There are metrics related to quality, but you can’t measure quality itself. Any number is only meaningful relative to a model. The number of bugs doesn’t show us anything – their nature does. What is the model behind the number?
A low number of bugs may mean bad testing, unreported bugs or lots of other things, but not quality. Here we were given the example of the Shuttle program, which had a low bug count… which might have come from a low tester-to-developer ratio. They only lost two shuttles, which is a good metric for the engineering effort (sarcasm).
We can also take the airline industry as an example – they don’t measure the percentage of people who died or the percentage risk of a crash. The industry works to be the safest. This is qualitative assessment, not quantitative. Qualitative assessment uses descriptive categories instead of numbers:
- good and can be better in bla bla way
- pretty good
- bad, because of bla bla bla
Each air crash has a history, conditions and impact. We need to extract the same information from the bugs we have; numbers won’t show this. Quality levels are set in an evolving conversation at bug triage meetings. These meetings are essential – they are the place where qualitative assessment is done and quality levels are set. There you decide whether a reported issue is a problem or not, whether one should worry about it or not, and whether it needs to be fixed or not.
It’s a must to study metrology as a science, to learn what can be measured under what circumstances, what can’t be measured, and how easily some numbers can be faked or misunderstood. One should also acknowledge that even a good metric might become a goal and focus people on the wrong things.
We were also advised to read Perfect Software: And Other Illusions about Testing by Gerald M. Weinberg.
Test reports push towards a certain way of testing. But it’s testers who should decide how to test. Your testing defines your reporting. Looking at my career, I totally agree with that one. Luckily, on most projects I was able to define my reporting on my own.
Every plan and report has a context. A different context most probably needs its own reporting and planning. James says that templates are mostly evil. I wouldn’t agree with that, since some reports are similar and need only (light) changes, and one shouldn’t start from scratch every time. I guess James’ point comes from places where one and only one template was used for everything, so we shouldn’t take this one too seriously. Depending on the context, you might need to analyse complexity, fundamental use cases, volatility, operability, customer profile and risks, and briefly refer to the strategies and heuristics to be used (don’t describe those, refer to them).
We got good advice on report outlines – always start with a summary, so that people won’t need to scroll. Most readers probably don’t care about the details; they need the result. Keep test documentation thin: save screen recordings and even videos of your testing sessions. These are much more informative than written steps, which will most probably miss some information the videos will have.
We had a ‘Borland is to purchase a CASE tool development company from Germany‘ exercise, where we had to create a template to evaluate the quality of the company and its product in two hours. It’s from James’ actual experience. Our team’s result almost matched the one James did.
Test strategy planning and management
The point is that testers should define and choose test strategies. Leads/managers should provide learning and consultancy for the team, not solutions. For testers, risk-based analysis is a good start – start from the weakest, probably buggiest place. Start with the most used functionality. Apply RST, exploratory and session-based testing, focus and de-focus. In RST, testing is tied to development – you communicate with developers on a daily basis. Try to test what has been developed recently, and share responsibility and concerns with the whole development team.
We also discussed three possible test management patterns:
People-based test management – done in small companies. You hire people and they are responsible for testing; you (the owner) don’t worry about anything.
Artefact-based test management – a pattern where documents, files and other deliverables constructed by testers are the testing. Large companies tend to do that. Testing is what testers do, not what testers produce. Developers create the product that we ship; we don’t ship test cases – we ship the product too. Development is not about creating files, and neither is testing.
Activity-based test management – which forked into two:
Thread-based test management – widely used for a long time, but only given an officially recognised name in 2010. Used in stressful, chaotic projects that are often pulled in unpredictable directions. Testing is managed with a list of activities, where the items are open-ended rather than finite. The items are activities – regression, stress testing, security, exploratory etc. Each day, preferably at the stand-up, you pick one item and work on that thread for the whole day. At the end of each day you analyse your learnings and report on the thread. In order to create a good final report, you need to perform all the threads well. You can read more about it here.
One might wonder – I’ve done this long ago without knowing it from the context-driven school guys, what’s up?! And it’s true – no one claims they invented it. It was specifically said that this community gave the pattern a name and recognition, so that testers would be protected from process people who want everything to be run in an organised, fully described way. Giving a commonly used pattern a name, then promoting and evolving it, is a good way to go.
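As a rough illustration (my own sketch, not anything from the course materials – the class name and methods are all made up), the thread-based pattern described above can be modelled as a rotating list of open-ended activities, with notes collected per thread for the end-of-day report:

```python
from collections import deque


class ThreadedTestPlan:
    """Hypothetical sketch of thread-based test management:
    an open-ended list of activities is cycled through, one thread
    per day, with notes captured for the daily report."""

    def __init__(self, threads):
        self.threads = deque(threads)          # activities, not finite test cases
        self.notes = {t: [] for t in threads}  # learnings collected per thread

    def pick_thread(self):
        # Today's focus rotates to the back so every thread gets revisited.
        thread = self.threads[0]
        self.threads.rotate(-1)
        return thread

    def record(self, thread, note):
        self.notes[thread].append(note)

    def report(self, thread):
        # End-of-day summary for the thread worked on today.
        return f"{thread}: " + "; ".join(self.notes[thread])


plan = ThreadedTestPlan(["regression", "stress", "security", "exploratory"])
today = plan.pick_thread()                     # -> "regression"
plan.record(today, "legacy import flow still fragile")
print(plan.report(today))                      # prints "regression: legacy import flow still fragile"
```

The point of the rotation is that no thread is ever “done” – it just comes around again, which matches the open-ended nature of the activity list.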
Testers don’t sign off on releases – testers sign off that a feature was tested enough under the existing circumstances and that the decision makers are informed enough to make a decision. Testers inform, they don’t decide; release managers decide.
Scheduling, estimating and resourcing
Don’t assign/hire the full team at once. Estimating effort and team size is just guessing. Hire one by one until you reach a comfortable workflow. Look at areas that are open to optimisation – smartly deployed tools save time without degrading quality.
Remember that there is no test phase in a project (sprint). Testing starts now – the moment a tester joins the team. Testing lasts for the whole project (sprint), until the testers are removed. Preparation for testing, getting to know the product and requirements, understanding business needs, setting up and maintaining the environment – it’s all part of testing. In a nutshell, ‘clicking through the product‘ is not testing, but all the mentioned activities grouped together are.
To give an estimate, first of all understand what is needed in the process – establish relationships, visibility and access, and set up environments. You need to understand the structure of the project. When these are clear, you have some input on the working environment and can make an estimate.
If it’s not possible to set these up and your warnings were rejected – go on record and describe your concerns and problems in an e-mail (no spoken agreements). Later this will save you in the blame game – you’ll have arguments.
In RST you advocate with examples and credibility. You don’t tell long stories or organise long meetings; you demonstrate your point. You should also work on your credibility – once you have respect, people will listen to you.
There is a thing called positive deviance – constructive disobedience. Often, higher-standing managers or simply clients don’t see the whole picture and/or lack competence. It is your duty to disobey and do what’s good and needed. Sooner or later it will pay off, as you gain the reputation of an independent, caring, thoughtful person.
You might be afraid and obey the not-so-smart orders of your boss/client. There are examples showing this won’t end well, as the project/product might fail or you’ll be outsourced. If you continue doing stupid things, someone else will do them cheaper. It’s good to push back and build credibility; smart clients like strong people, and clients learn too. You might get fired or miss a contract, but you don’t want to work on ridiculous projects anyway, do you? A smart, strong and independent professional will always be needed and hired.
Use safety language – avoid telling people what to do. Don’t ask ‘why‘ questions that put people on the defensive. Ask open questions – ‘Am I right?’, ‘Did I understand correctly?’. Use the ‘we assumed -> we did -> we found -> we suggest’ construction.
Learn to sell your findings and suggestions. Read The Social Life of Information by John Seely Brown and Paul Duguid. Also read The Secrets of Consulting: A Guide to Giving and Getting Advice Successfully by Gerald M. Weinberg.
Remember that executives mostly don’t care about your opinion on how the business should be run. Don’t go further in your judgement than testing – there are other people who might feel offended if you get onto their territory, especially in front of others. Still, keep your questions and suggestions ready in case anyone ever asks. By changing the language, you can speak about how the business can save money, gain customers etc. – that they do understand and care about.
Improve your speech and discussion skills, and build your stamina for stand-up presentations. Good venues for improving are weekly team meetings, presentations and knowledge-sharing meetups.
Regular knowledge-sharing and hacking events are essential for your team. You can watch learning videos, share knowledge from books you’ve read, arrange technical exchange sessions, pair testing, offsites and swapping teams for a short time.
You, as a manager of technical people, must know how and with what your team is working. So it’s not only meetings and reports, but also some time spent with your team doing real testing of your products, to stay in touch with the skills, the products and the team. For example, one day a week Jon Bach informs one of the teams and comes to the office early. He then digs into the system and reviews the results with the team.
It should be noted, though, that it’s not a “grim reaper outreach program” (© James) to punish someone, but a regular keep-in-touch routine.
Every manager’s duty is to keep the team motivated and help people evolve their skills. Often, organisations want quarterly, half-year or yearly goals. Goals are mostly made up in half an hour and measured in quantity, not quality. You can’t predict the future well, and some of your long-term goals might be affected by business changes. Emergencies might put a worker into a situation where he has to choose – solve the emergency problem or work on fulfilling the goal.
Sure, when the focus changes, goals are often automatically marked as ‘meets‘. But then what’s the point of having them if you don’t accomplish them? Each worker’s goals are obvious – do good work and learn.
Take, for example, an expensive football player. His goal is to score as many goals as possible. He obviously doesn’t have ‘run 20% faster’, ‘score 10% more goals’ or ‘create attacking technique documentation’ goals. Instead, the team coach addresses each player individually and the team as a whole. They are in constant conversation; it’s not a once-per-half-year activity, it’s permanent.
To properly manage a team, have weekly or bi-weekly face-to-face talks with your team members. Ask what the worker’s ambition is, what he wants to achieve and how you can help him with that. Ambitions and needs might change, and that’s normal – help them out. Report performance based on these results. Basically, James advises revolting, starting a positive deviance and making up the official HR goals, as Jon actually does at eBay. We were also advised to read Tom DeMarco’s “Peopleware: Productive Projects and Teams”.
While Scrum doesn’t say anything about that, in real life it is very programmer-centric. With short iterations and rapid releases, the symmetry problem becomes an issue: the assumption that testing needs the same amount of time as development, and surely not more. Which is, of course, wrong most of the time.
Due to time shortage, there’s tremendous pressure to automate everything. One of the reasons to automate testing is a misunderstanding of testing, and testing being a second-class citizen in Scrum. You usually automate unpleasant, uninteresting things that don’t bring value. Remember, you are automating simple checks, not the actual exploratory, constantly learning process that testing really is.
The testability gap is another issue, as some functionality may be unreachable or mocked. One should remember that once mocks are removed, the functionality has to be re-tested. Automated checks will help, as will the knowledge from the first sessions. Developers and testers need to stay in contact and inform each other about what was changed, added or removed, to avoid nasty surprises.
A quite common mistake is the separation of development and testing into separate testing tasks. Already a mistake in traditional, non-agile processes, this is even more dangerous (you fail faster) in agile. Testing should be built in, be an implicit part of the work, and be done in parallel with development, not afterwards.
One should push back, using all one’s credibility and every possible example, to avoid these common mistakes.