Ask Me Anything on AI + HR
Some questions and answers from a session at The Shape of Work (TSOW) Slack community.
Last week I had the pleasure of engaging with the TSOW Slack community in an asynchronous session about AI+HR and other related topics. I spent considerable time putting together the answers and researching additional material to make my responses resource-rich.
Naturally, this material makes for a good article, so with the permission of the TSOW Administrators, here is a summary of the AMA session. And since last week I skipped publishing an article, this week it comes extra-loaded.
The answers have been edited for better formatting in this medium, because there were a few other resources I wanted to add, and also because (let's be honest) Slack is not the best when it comes to editing long blocks of text.
And there were some loooong blocks of text in this one…
❓: Which areas of HR do you think AI will have the most impact on?
🤓: Hi there! AI will have a profound impact in all areas of HR, and I have covered this in a recent series of articles in my newsletter, ProductizeHR.
But if I had to choose the top 5, I would go with the ones that involve large amounts of data and decisions to make around that data. For example:
1- Recruitment & Talent Acquisition:
We have seen this grow over the past few years and now it's omnipresent. Both on the recruiter side (screening, sourcing, candidate engagement, etc) and on the candidate side (resume and cover letter builders, job matching to help you find the best fit, automation tools for networking, interview coaching & negotiation tools, etc).
2- Onboarding and training:
This is starting to grow, as many organizations transition into a skills-based approach to expand talent pools and retain great workers. This can take the shape of personalized/adaptive learning, simulation-based training, VR training, AI-powered coaching platforms, skills assessment and mapping, talent pipeline identification, etc.
3- Performance Management:
AI can help analyze large amounts of data about performance, and provide insights both to employees and their managers, both from descriptive and predictive perspectives, and can then provide real-time feedback and coaching.
4- Compensation & Benefits:
This is already being used by companies such as Payscale.com, Salary.com, and many others to analyze market data and make recommendations for salary ranges & benefits packages, analyze flight risks, determine performance-based rewards more fairly, etc.
It can also be used to understand benefit utilization to optimize the package offering for impact on the workforce, while also keeping an eye on how you rank against what the rest of the market is offering.
5- Employee engagement and retention:
This area is closely related to productivity, performance & turnover. AI can be used to identify and address the factors that impact engagement and retention, including the suggestion of ways to improve work-life balance, recognition opportunities, etc. Leveraging people analytics can also help HR design personalized career paths that predict and anticipate trends and career events so that growth opportunities are initiated by the employer, rather than requested by the employee.
❓: In terms of HRtech, what do you see as baseline expectations from tools in AI integration over the next couple of quarters? We’re already seeing gAI being integrated as default in Recruitment tools.
🤓: Thanks for your question! I'd say all HR tech products worth considering are working on some kind of AI integration. The challenge for some would be to include AI into their platforms in a way that is unobtrusive and meaningful. There are a lot of copycats adding "AI features" which are focused mostly on the basic levels of content generation, like "Help me write".
While that's not bad in most cases, it's not much of a delighter. Some of the areas where AI is having the most impact, however, are things like Internal Talent Marketplaces, which have an impact on all areas of Talent Acquisition, Retention, and Engagement.
Josh Bersin has a very interesting article where he identifies 3 current generations of AI vendors:
Emerging (AI Added On). Example: Most HR Applications
First Generation (AI Built In). Examples: Workday, Linkedin, Cornerstone, SAP
Second Generation (Built On AI). Examples: Eightfold, Gloat, Beamery, Seekout.
He points out that the large players (most of whom are First Generation at best) have aging architectures that are not "native" to AI, and while they are built to support a lot of requirements, their core system is "fairly rigid and brittle".
Second Generation tools have the advantage of having been built on AI, so they are much easier to integrate right now. Large players will need to invest heavily in rebuilding their platforms to achieve that level of flexibility.
Hope it was useful!
❓: With these fast-changing dynamics around AI, ChatGPT, and Prompt Engineering; I am curious to know "What are the potential biases that can arise when implementing AI in HR, and how can organizations mitigate them?"
🤓: This is an interesting one, and I will give you responses from 2 different perspectives. I'd say the main bias of the output would come from the information on which the generative AI was trained. The output of the generative AI reflects this, as you can see in the example here, where an AI is asked to generate images of what a professor in different disciplines looks like. Or this other example from the MIT Review. Or this one about the AI depiction of "African workers". Most of the examples are from image generation because it's much easier to observe, but obviously, this exists in every type of generative AI.
Bias is inherent to AI, just like it is to human experience. And it can be reinforced when the AI is trained repeatedly on AI-generated output, with weird results like Model Autophagy Disorder (the people who named it worked very hard to make that acronym; respect).
Some companies like Textio are trying to position themselves as a solution to the problem of bias in AI. This article shows how they use a careful prompting approach to reduce bias.
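Textio's actual approach is proprietary, but the general idea of prompt-level debiasing is easy to sketch: prepend explicit instructions that steer the model away from coded language before sending the user's task to a generative AI. Everything below (the function name, the guideline wording) is a hypothetical illustration, not Textio's method or any specific vendor's API.

```python
# Hypothetical sketch of prompt-level debiasing: the guideline text and
# function are illustrative only, not a real vendor's implementation.

DEBIAS_GUIDELINES = (
    "Use gender-neutral language. "
    "Avoid age-coded words like 'digital native' or 'energetic'. "
    "Describe job requirements, not personality stereotypes."
)

def build_debiased_prompt(task: str) -> str:
    """Prepend debiasing guidelines to the user's task before it reaches the model."""
    return f"{DEBIAS_GUIDELINES}\n\nTask: {task}"

prompt = build_debiased_prompt("Write a job posting for a senior data engineer.")
```

The point is that the mitigation happens before generation: the model still carries the biases of its training data, but carefully constructed instructions can reduce how much of that bias surfaces in the output.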
Now, just in case you were talking more about the bias of people in using AI (instead of the bias of the AI output), the biggest challenge is when companies do not regulate the use and do not train the employees.
There are plenty of examples of bad use of AI, whether it is because the content generated is inappropriate for the situation, or because it potentially exposed private or confidential information. Here's a list of Awful AI that David Dao compiled with some of the worst use cases (living, evolving list of use cases here). Arguably, the very recent issue behind the Screen Actors Guild’s strike should be added to that list. (Black Mirror writers, stop giving ideas…)
The main recommendation would be to establish governance policies and to train team members on appropriate use. It’s not like you can really ban it. People will use it even if you do, so you might as well teach them to use it safely.
And, of course, if your company creates products, it’s always important to have solid guidance around responsible AI (take Google and Microsoft’s examples), ethics, inclusive AI, and AI design patterns.
Hope this was useful!
❓: How can we ensure that candidates get relevant feedback worth their time investment beyond an automated message? What are the best practices you follow?
🤓: Thanks for the question! In my opinion, it all starts with answering some questions before they are asked: having a strong careers page that describes life, benefits & values at your organization and also includes employee testimonials to make this initial contact more "humane". You also need to have a clear job posting that describes what is expected of candidates and also sets the right expectations about the process. Giving candidates insight into how the process is going to look goes a long way when it comes to reducing anxiety (and answering questions).
Once the candidate enters the process, there are essentially 2 kinds of feedback:
about the status of the process
about the candidate's fit (or lack thereof) with the position
Process status feedback can be easily automated, and there are many ATS that provide the ability to configure messages to be sent automatically to the candidate as they progress (or not) through the application process. Clear, predictable communications (e.g. "We're going to send you an update before the end of business hours Friday") help reduce anxiety and build trust.
Candidate-fit feedback is more challenging. On one hand, you want to be a nice human being and provide valuable feedback that candidates can use in their job search. But at the same time, you have a responsibility to protect your organization's privacy, and not create liabilities for your employer by treating candidates differently. This can also be done with templated messages (so that there is overall consistency in the responses), but it can be useful to have more than one template to reflect different stages of / reasons for elimination.
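The two kinds of feedback above can both be driven by templates keyed to the candidate's stage. A minimal sketch (template names, wording, and the helper function are hypothetical examples, not a specific ATS feature):

```python
# Illustrative sketch: stage-aware templated candidate messages.
# All template keys and wording are made-up examples, not a real ATS API.

STATUS_TEMPLATES = {
    "received": "Hi {name}, we received your application for {role} "
                "and will send you an update by {date}.",
    "in_review": "Hi {name}, your application for {role} is being reviewed. "
                 "Expect news by {date}.",
}

REJECTION_TEMPLATES = {
    "screening": "Hi {name}, we won't be moving forward with your application "
                 "for {role}. We encourage you to apply to future openings "
                 "that match your background.",
    "final_round": "Hi {name}, thank you for the time you invested in "
                   "interviewing for {role}. We chose another candidate, "
                   "but we'd be glad to stay in touch.",
}

def render_message(templates: dict, stage: str, **fields) -> str:
    """Fill the template for a given stage with candidate-specific fields."""
    return templates[stage].format(**fields)

msg = render_message(STATUS_TEMPLATES, "received",
                     name="Ana", role="Product Manager", date="Friday")
```

Keeping separate templates per elimination reason gives you the consistency (and legal safety) of canned responses, while still telling later-stage candidates something more specific than a generic rejection.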
Having a structured interview approach and using a rubric of evaluation is always a good idea so that you can document the decisions in the candidate selection, regardless of how much you decide to reveal to the candidate. In any case, the responses should be carefully written to be tactful, useful, and empathetic, while at the same time being compliant and safe answers. This article in Workable has some good advice on what to do and what not to do. Hope you find this useful!
❓: I’m curious if anyone in the community is utilizing AI for sourcing profiles and how they have benefited from it.
🤓: Hi, there! Thanks for asking.
There are a number of tools like SeekOut, Phenom, and Source (full disclosure, I am an advisor to Source) that aim to uncover hidden pools of talent by searching candidates on other platforms like GitHub or Stack Overflow, particularly when candidates are not likely to be found or be active on LinkedIn. LinkedIn has its own Recruiter tool, which perhaps is not as polished and useful as some of the other specialized tools, but it has the benefit of being connected to the largest talent pool in the world.
With the trend towards skills-based organization, it is also important to do sourcing within your own organization, because sometimes the best talent is already in-house but in another role or department. Some tools like Eightfold and SeekOut aim to help with the sourcing of external and internal talent by leveraging AI-based profile matching and recognition. Hope this is useful!
❓: Are there any beginner-level or generic courses you would recommend for someone interested in exploring the vast world of AI?
🤓: Hello! There is no shortage of training material around AI, but one that I have found valuable (particularly because it was created by the same person who writes a very insightful newsletter called The Neuron) is this Intro to ChatGPT free course.
There's also a very popular course on Coursera by Andrew Ng called "AI for Everyone" and another by IBM called "Introduction to AI" by Rav Ahuja. And Udacity also has a free one. Hope you find these useful!
❓: What is your take on identifying authentic response/content in this AI World? CVs to Cover Letters, Assessments to Interviews, and Content to Designs, most of it is available using an AI tool. But how do we as Talent Experts identify these? Especially during assessment stages?
🤓: Ohh, this is a very interesting topic, which IMHO requires some self-reflection. Thanks for asking!
There are some tools out there that aim to detect the use of AI that can be useful for specific cases where this is an issue. Some of them are pretty simple and some of them are more powerful and complex.
However, this should also be an invitation to reflect on the validity of CVs, Cover letters, and other artifacts when evaluating candidate skills. First of all, if you intend to downgrade AI-generated content, you should say so explicitly in the application process, so that candidates can decide whether the extra effort on their end is worth it. And if you do that, you should also consider: what message does this send about your organization?
On the other hand, if you don't mind people submitting AI-generated resumes, cover letters, etc., you should also think about what you are truly evaluating by asking for a cover letter: are you just testing the resourcefulness of the candidate in learning how to prompt ChatGPT or some other AI?
My personal opinion is that all Talent Acquisition professionals ALREADY know that CVs, Cover Letters, Interviews, Live Assessments, etc. are imperfect proxies for assessing candidate fit.
CVs and cover letters are a huge simplification of the value of a candidate. They are easier to evaluate and we are all used to them. But this does not mean they are perfect in any way. At some point, as AI is becoming part of all the software being used to write resumes, asking a candidate NOT to use AI would be akin to asking them to deliver a handwritten resume, in person: a simple hurdle that aims to detect how hard the candidate is willing to work for it, rather than whether the candidate is a good fit.
Also, are you going to ban them from using AI while on the job? If not, why do it in the application process?
I personally feel that our fear of candidates using AI says more about our lack of reliable resources or our laziness when evaluating candidates than what it says about the candidates. Rather than fear it, we should embrace the use of AI, and find ways to evaluate candidates that incorporate AI.
After all, if you're using an AI-based ATS to evaluate the resume, is it fair that the candidate is not able to use AI? And if they do, what's the point of having an AI evaluating something generated by another AI?
A similar paradigm is being shattered in education: Ethan Mollick, an Associate Professor at Wharton, has been experimenting with incorporating AI in the classroom for the future of education. And UX expert Jakob Nielsen recently commented on one of my LinkedIn posts, sharing some information on the benefits of AI for entry-level employees. AI is so transformative that it forces us to reconsider how we evaluate talent. Candidate evaluation methods and tools should also evolve with the times. Hope this is useful!
❓: Could you suggest any AI/ML courses that you would recommend?
🤓: Hi there! I love to share resources, so here it goes.
For those who want to go beyond the entry-level AI courses, my recommendation is to take Cassie Kozyrkov's course, "Making Friends with ML". She is great at explaining complex topics. Also worth reading is the "Demystifying AI" series of articles by Tech Talks. I forgot to mention that for those starting with AI, there are a number of prompting courses and cheat sheets and more here and here. Also, it's not all about ChatGPT. There are some other competitors like Google Bard, Anthropic's Claude, and Inflection's Pi that are quite powerful, and some are more user-friendly than ChatGPT.
I should also bring up OpenAI's recently launched Code Interpreter, which is accessible through a ChatGPT Plus subscription. I haven't gotten around to paying for the subscription yet, so I haven't used it myself, but everything I have read indicates that it is a very powerful and flexible tool that can be used for many different purposes. Hope this is useful!
Wow, that was a long one!
Tell me in the comments: do you prefer this long format, or would it be easier if I broke long pieces like this one into two or three issues? I want to know, thanks in advance for your answers!