That is the end of the TMMi Assessor course.
Once your tutor has reviewed your submissions you will be issued with your certificate of completion which you can use as evidence to accompany your application for accreditation as a TMMi Assessor or TMMi Process Improver.
I hope you found it informative and useful. If you have any comments or suggestions for improvement please let us know.
This course is intended to provide students with:
- A detailed knowledge of the TMMi standard reference model, its structure and content.
- An awareness of the origins of the TMMi framework.
- The requirements and framework to undertake assessments of organisations’ test-related quality processes against the TMMi Model.
- Sufficient knowledge to take and pass the TMMi Professional examination.
The course builds up the knowledge gradually, as opposed to going straight into the fine detail. We start at a high level, then delve deeper to provide a good understanding of the model and its components, as well as how to interpret those components when undertaking assessments.
- TMMi does not offer a standard approach to a change program in an organisation. There are many approaches available, such as W. Edwards Deming’s ‘Plan – Do – Check – Act’ cycle. CMMI, however, uses a model called IDEAL, which offers an extensive and practical reference standard for introducing changes. It shows what needs to be done when implementing TMMi improvements and comprises five key phases:
- Initiating – Establish the infrastructure for improvement to promote a successful process
- Diagnosing – Determine the organisation’s current state and what it wants to achieve
- Establishing – Plan and specify how the chosen solution will be established
- Acting – Execute the plan
- Learning – Learn by experience and improve the skills needed to implement change
- Gartner Hype Cycle – The curve of the Gartner hype cycle is used in many areas of new process or new technology uptake. It shows there is initial enthusiasm for change, which peaks before moving to a downward trend as new processes bed in and people realise they need to change.
- Critical success factors for TMMi implementation – If these factors are not met this will create an immediate and considerable risk for the improvement program. The key factors are all related to the initiating and diagnosing phases of the IDEAL model.
- Critical success factors for establishing improvements – There are many factors that, if taken fully into account, will help to ensure the success of a test process improvement program:
Once improvements have been implemented and shown to be viable and working, it is important to get them into the organisational plan so they can be managed in context with other work.
There should be regular feedback to the sponsor at appropriate intervals (weekly or fortnightly) with progress reports in the standard organisational format (for example, RAG statuses and cost / effort used).
All stakeholders should continue to be informed of milestone achievement and provided with updates regarding the new implementations so everyone understands the process.
Some provision of support and guidance should be made available to help answer user questions and deal with issues. It is very important that when problems are reported they are addressed and fixed quickly to maintain confidence in the new processes.
After a suitable time (for example, six months after implementation) the new processes are re-evaluated to confirm they are still working and the expected benefits are being delivered.
The curve of the Gartner hype cycle is used in many areas of new process or new technology uptake, not just for the IDEAL model.
It shows there is initial enthusiasm for change, which peaks before moving to a downward trend as new processes bed in and people realise they need to change.
If people do not accept the change and revert to their old ways, then the improvement is finished.
If, however, stakeholders keep working at it (and, if need be, slightly amend processes so they fit) then practical benefits will eventually be realised.
There are numerous ways of working out the potential and actual financial benefits of implementing TMMi. Here we show one way, based on the method Experimentus uses. Below is an example of a spreadsheet used to help calculate and document the costs and benefits for a Project Improvement Plan (PIP) for a real client:
For each area of improvement we estimate how many person days we believe it will take to make the improvement. We then agree with the client what the estimated average cost of a person day should be (this will include both the client’s resource costs and Experimentus’ resource costs, dependent on our level of involvement). These are used to calculate the estimated cost of implementation. Here we have estimated £1,000 per day.
We then calculate the anticipated benefit and by subtracting the cost we can estimate the net potential benefit for each of the activities.
For PIP ID 5 you can see that we have an improvement with a potential benefit of zero against it. Often we find that there are some improvements for which there is no financial benefit. The benefit they do provide is that they are foundation stones for the other changes; if these are not implemented, the benefits of the other changes would not be delivered.
These items still need to be in the costings, and as long as the overall calculation still demonstrates a sufficient benefit there is not an issue. The advantage is that by including these zero-benefit items alongside the items with positive benefits, the overall cost of the implementation is fully understood.
The above is an example of a potential benefits summary where there is potential value add to the business.
We have included a confidence factor (how confident are we that the benefits will be realised) which can be adjusted to each situation. The implementation factor (how much could implementation costs vary from the amount originally estimated) can be positive or negative. For this example, from experience with this particular client, we have estimated an extra 50% could be needed. The final net potential benefit is calculated by subtracting the adjusted cost from the adjusted benefit.
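The arithmetic described above can be sketched in a few lines. This is an illustration only: the day rate, the exact way the confidence and implementation factors are applied, and the example figures are assumptions, not the Experimentus method verbatim.

```python
# Illustrative sketch of the PIP cost/benefit arithmetic described above.
# The formulas and figures are assumptions for illustration, not the
# Experimentus method verbatim.

DAY_RATE = 1_000  # agreed average cost of a person day, in GBP (example value)

def net_potential_benefit(effort_days, anticipated_benefit,
                          confidence_factor=1.0, implementation_factor=0.0):
    """Return (cost, adjusted_benefit, adjusted_cost, net) for one PIP item.

    confidence_factor: how confident we are the benefit will be realised (0..1).
    implementation_factor: how much the cost could vary, e.g. 0.5 for +50%.
    """
    cost = effort_days * DAY_RATE
    adjusted_benefit = anticipated_benefit * confidence_factor
    adjusted_cost = cost * (1 + implementation_factor)
    return cost, adjusted_benefit, adjusted_cost, adjusted_benefit - adjusted_cost

# Hypothetical item: 20 person days, £60,000 anticipated benefit,
# 80% confidence, and the +50% cost allowance from the client example.
cost, benefit, adj_cost, net = net_potential_benefit(20, 60_000, 0.8, 0.5)
print(cost, benefit, adj_cost, net)  # 20000 48000.0 30000.0 18000.0

# A zero-benefit "foundation stone" item still carries its full cost.
print(net_potential_benefit(10, 0)[3])  # -10000.0
```

Note how the zero-benefit foundation items show up as pure cost; the overall programme case rests on the positive-benefit items outweighing them.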
For this section of the course (Implementing TMMi) there are eight learning objectives:
- LO 7.1 [K2] Summarise the activities of the initiating phase of the improvement framework
- LO 7.2 [K2] Summarise the key elements of a test policy
- LO 7.3 [K2] Summarise the activities of the diagnosing phase of the improvement framework
- LO 7.4 [K2] Summarise the activities of the establishing phase of the improvement framework
- LO 7.5 [K2] Summarise the activities of the acting phase of the improvement framework
- LO 7.6 [K2] Summarise the activities of the learning phase of the improvement framework
- LO 7.7 [K1] Recognise the critical success factors for test process improvement
- LO 7.8 [K2] Explain the risks behind not considering the critical success factors
In a classroom-based situation, the syllabus for this course expects this section of the training to take approximately 90 minutes.
- The TMMi Reference Model is intended to be generic enough to apply to all test life cycle activities and delivery models but specific enough to cover all anticipated activities and deliverables. However, the model needs to be interpreted by the assessment team depending on the organisational context.
- Assessments uncover the maturity of the test process and will determine if the organisation has met a specific level of maturity. The result of the assessment will help prescribe what improvements are required and an action plan for the improvements can be defined.
- Assessment Criteria Quick Comparison

| Type | Assessment Team Lead | Team Size | Evidence | Capability Rating |
| --- | --- | --- | --- | --- |
| Informal | Experienced assessor | At least one | One type of evidence required, e.g. interviews | No rating against TMMi is produced. Used for a quick-check assessment to gain a rough understanding of maturity in various areas and of improvement opportunities |
| Formal | Accredited Lead Assessor | At least two: an accredited Lead Assessor and at least one other Accredited Assessor | Staff interviews and document study required; other types of evidence, such as questionnaires, are recommended | A verifiable benchmark rating of the organisation against TMMi is produced. Strengths and weaknesses can be identified in detail, including full gap analysis |
- The diagram below illustrates a typical high-level structure of an assessment process. The main documentary outputs are shown on the right of the related processes.
The table below shows the relationship between the ordinal scale and nominal ratings and to which process components they are applicable.

Ordinal Scale Ratings (applicable to model components from Maturity Levels to Generic Practices):

| Rating | Description | Percentage Achievement |
| --- | --- | --- |
| F | Fully Achieved | >85% to 100% |
| L | Largely Achieved | >50% and <=85% |
| P | Partially Achieved | >15% and <=50% |
| N | Not Achieved | 0% to <=15% |

Nominal Ratings:

| Rating | Applicable to | Description |
| --- | --- | --- |
| NA | Process Areas | Not Applicable |
| NR | Any | Not Rated |
Note that Sub-Practices are not required to be scored since they are informative components.
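The percentage bands above amount to a simple threshold mapping, which can be sketched as follows. This is an illustrative sketch of the boundary logic only; the function name is an assumption and this is not official assessment tooling.

```python
# Map a percentage achievement to its ordinal rating letter, using the
# threshold boundaries from the table above. Illustrative sketch only.

def ordinal_rating(percent_achieved: float) -> str:
    if not 0 <= percent_achieved <= 100:
        raise ValueError("percentage must be between 0 and 100")
    if percent_achieved > 85:
        return "F"  # Fully Achieved: >85% to 100%
    if percent_achieved > 50:
        return "L"  # Largely Achieved: >50% and <=85%
    if percent_achieved > 15:
        return "P"  # Partially Achieved: >15% and <=50%
    return "N"      # Not Achieved: 0% to <=15%

print(ordinal_rating(85))  # L  (85% sits at the top of the Largely band)
print(ordinal_rating(86))  # F
```

Note the strict inequalities: 85% is still Largely Achieved, and Fully Achieved starts only above 85%.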
- There may be many cases where scoring is not clear-cut. Scoring Goals is not a simple mathematical exercise, but all the circumstances have to be taken into consideration to arrive at a fair score that truly represents the position.
- When rating Process Areas, the element of consensus within the assessment team is no longer applied. There is a strict rule for identifying the rating: a Process Area is only as strong as its weakest link, so the rating of the weakest Specific or Generic Goal becomes the Process Area’s score.
- Scoring individual Maturity Levels is relatively simple – the Maturity Level is based on the weakest rating for the relevant Process Area.
- To achieve a Maturity level, both of the following must apply:
- All Process Areas at the particular Maturity Level must be scored as Fully Achieved or Largely Achieved.
- All lower-level Maturity Levels (and therefore all lower-level Process Areas) must be scored as either Fully Achieved or Largely Achieved.
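The weakest-link rules above can be sketched in code. The rating order and helper names here are assumptions for illustration, not part of the TMMi standard or its assessment method.

```python
# Illustrative sketch of the weakest-link scoring rules described above:
# a Process Area takes the rating of its weakest goal, and a Maturity Level
# is achieved only if all its Process Areas are Fully or Largely Achieved.
# Names and structure are assumptions for illustration.

ORDER = ["F", "L", "P", "N"]  # strongest to weakest

def weakest(ratings):
    """Return the weakest rating in a collection (weakest-link rule)."""
    return max(ratings, key=ORDER.index)

def process_area_rating(goal_ratings):
    """A Process Area scores as its weakest Specific or Generic Goal."""
    return weakest(goal_ratings)

def maturity_level_achieved(process_area_ratings):
    """Achieved only if every Process Area at the level (and, per the
    second rule, every lower level) is Fully or Largely Achieved."""
    return all(r in ("F", "L") for r in process_area_ratings)

# Example: one Partially Achieved goal drags the whole Process Area down,
# which in turn blocks the Maturity Level.
pa = process_area_rating(["F", "L", "P"])
print(pa)                                   # P
print(maturity_level_achieved(["F", "L"]))  # True
print(maturity_level_achieved(["F", pa]))   # False
```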
- Data Submission Requirements (DSR) – At the end of an assessment (formal or informal) a DSR should be completed and submitted to the Foundation. The information submitted is a high-level version of the assessment results and a summary of assessor activity and effort on the assessment.