Measuring the Effects of Soft Skills Training
Although soft-skills training courses often get a bad rap in organizations, we know that, in many companies, soft skills can make or break a career. Soft skills make some employees stand out, for better or worse. Effective communication, conflict resolution and the ability to foster creativity and cooperation are the skills that let leaders get more done through people.
And yet, according to a recent study by 24×7 Learning, only 12% of learners actually apply what they learned on the job. Typically, there are four ways a training effort can miss the mark:
- The students could be in the wrong class (it does not meet their skill needs).
- The class itself could be boring: lecture only, with no action for students.
- There is no application: managers do not reinforce the learning.
- The effects of learning are not measured, observed or reported.
Number four is what helps perpetuate the perception that training does an organization no good. When items one through three are done well, that is a huge accomplishment. The next critical step is to shout out the results. Here are some ideas for measuring new and/or increased use of soft skills in the workplace.
Start before training implementation. Gather existing data on issues caused by soft-skill gaps. Your sources could be exit interviews, annual employee survey results, employee relations complaints, survey results from supervisors (or direct reports), pre-training assessments, 360 assessment results, performance management data, the number of internal promotions, or the number of conflicts escalated to Human Resources. For example, what if your search yields exit-interview complaints about a lack of advancement opportunities? Retention of a certain type of hard-to-find employee is low, raising recruiting costs and holding the business back in key areas.
To get managers to perform excellent development planning in the workplace, let's search a little further. To establish baseline data, look for evidence that managers are currently designing learning plans for direct reports, and audit those plans to see whether they are well written, meaningful and effective. Next, ask some targeted questions of the students themselves: Are they learning enough to feel competent? Are they learning from training or by making mistakes (the latter is usually more costly)? Other great questions could cover the strength of succession plans for critical positions and the rate of internal promotions.
According to a study by DDI (Development Dimensions International), new hires placed into leadership roles fail at a rate of 22%, while internally promoted hires fail only 11% of the time. Internal expertise and cultural acclimation make a big difference.1
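Those failure rates can be turned into a rough dollar comparison. The sketch below uses DDI's reported 22% and 11% rates; the cohort size and per-failure cost are hypothetical placeholders you would replace with your own figures.

```python
# Sketch: expected replacement costs for external vs. internal leadership
# hires, using DDI's reported failure rates (22% external, 11% internal).
# Cohort size and cost-per-failure below are hypothetical assumptions.

EXTERNAL_FAILURE_RATE = 0.22
INTERNAL_FAILURE_RATE = 0.11
COST_PER_FAILED_HIRE = 75_000  # hypothetical: recruiting + ramp-up + vacancy

def expected_failure_cost(hires: int, failure_rate: float) -> float:
    """Expected cost of failed placements for a cohort of hires."""
    return hires * failure_rate * COST_PER_FAILED_HIRE

external = expected_failure_cost(20, EXTERNAL_FAILURE_RATE)
internal = expected_failure_cost(20, INTERNAL_FAILURE_RATE)
print(f"External: ${external:,.0f}")
print(f"Internal: ${internal:,.0f}")
print(f"Difference: ${external - internal:,.0f}")
```

Even with rough inputs, a comparison like this gives you a defensible number to put beside the cost of the development programs that make internal promotion possible.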
Even if you cannot closely estimate what turnover and poor morale cost the business, making a business situation better, especially one tied to a business priority, increases the credibility of organizational development. Think of how the accounting department, the quality department or the warehouse areas live by numbers and data. They must report their efforts via graphs, reports, trends and other ways to show value. Why are we so soft on training and other people-related analytics?
Actually, organizations are not really soft in these areas. Think about it: when there is a dip in profit margins, a recession or some other hit, training budgets, staff and support shrink away. When workforce training has credibility, and learning is seen as a strategic partner, it is not so easy to throw out learning programs.
If you are an instructor, an HR professional or in the organizational development field, I challenge you to think proactively about the learning programs you support. If the CFO asked what benefit the business is getting from its learning programs, would you be able to show that the results of training exceed its costs?
With the four-level guide published in 1954 by Donald Kirkpatrick, organizations were finally able to make the effectiveness of training transparent.
| Level | Question |
|---|---|
| Level 1 | Did students like the training? |
| Level 2 | Did students learn the subjects being taught? |
| Level 3 | Did students use the new skills and knowledge on the job? |
| Level 4 | Did the training make a bottom-line impact on the business? |
Unfortunately, many organizations still measure only to Level 1, asking for feedback about the class. But make no mistake: top decision makers really want to know whether training is worth it, and they hope to hear that business issues caused by gaps in skills and knowledge have been solved. The solution is to speak to the fourth level of training evaluation whenever possible and practical. Start with data, send the right students to an engaging class, have students leave class with a clear plan to change behaviors, and have managers reinforce the new behaviors on the job and observe the changes. Share the accomplishments and plan for more!
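The Level 4 question ("did training make a bottom-line impact?") comes down to a simple comparison of benefits and costs. Here is a minimal sketch of that calculation; the dollar figures are hypothetical assumptions, stand-ins for the baseline data you gathered before training.

```python
# Sketch: a simple Level 4 check -- do training benefits exceed costs?
# All dollar figures below are hypothetical; substitute your own
# baseline and post-training measurements.

def training_roi(benefit: float, cost: float) -> float:
    """Return on investment as a percentage: net benefit relative to cost."""
    return (benefit - cost) / cost * 100

cost = 40_000      # hypothetical: design, delivery, participant time
benefit = 110_000  # hypothetical: reduced turnover, fewer HR escalations
roi = training_roi(benefit, cost)
print(f"ROI: {roi:.0f}%")  # a positive ROI means benefits exceeded costs
```

The hard part is not the arithmetic but the benefit estimate, which is exactly why the pre-training data gathering described earlier matters: without a baseline, there is nothing to compare against.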
Use this model as your guide to see behaviors change in the workplace!
Ask Katy Caselli for a free consultation on your organization’s workforce development needs. 919 564 6855 or KCaselli@BuildingGiants.com
- Rioux, S., & Bernthal, P. (1999). Succession management practices report. Pittsburgh, PA: Development Dimensions International.