It’s open enrollment time, and your plan participants have questions about which health plan to choose or whether a new voluntary benefit might be right for them. Your benefits staff is stretched thin trying to answer them all.
What if you had a digital assistant that could not only answer basic questions, such as the differences in deductibles and copays or between a narrow-network plan and a preferred provider organization (PPO) plan, but also guide employees through these decisions based on variables they provide?
That’s just one area where the latest generation of artificial intelligence (AI) tools could make an impact on employee benefits. As we mark the first anniversary of the launch of ChatGPT, experts in the benefits field are gaining a greater understanding of how these advanced tools could be used.
What’s Changed?
AI is not new, and neither is its application to employee benefits or human resources. Employers have been using AI to help them evaluate resumes and select job candidates for several years. Benefit plans have used decision-support tools to help plan participants choose their benefits based on certain factors.
But the technology has vastly improved, experts say. “In a nutshell, the technology has finally caught up with what we need it to do,” said Carrie Cherveny, a regional chief compliance officer with Hub International.
“It’s more accessible. You can generate new content, you can have these humanlike conversations, and it can save us a lot of time. There’s a lot of potential there with these new generative AI tools to add some real efficiencies and speed to what we’re doing,” said Susan Goldenson, vice president and director of Segal’s Innovation Lab.
What Are Some Key Applications?
Personalization
During open enrollment season, an AI platform could send out personalized reminders and help employees learn about their benefits. “An AI-powered chatbot could simplify the enrollment process, answer questions, and even recommend benefits based on individual preferences and needs and create a customized benefit package,” Goldenson said.
These new tools represent an advance over previous decision-support tools because they can handle complex conversations and improve over time instead of having to be programmed for every scenario, she said.
Data Analysis
Employers can use AI to sift through employee data (including demographic information like age, seniority and place of residence as well as past behavior) and make recommendations for all kinds of benefits applications.
For example, an employer might want to know how many Baby Boomers, Gen-Xers, Millennials and Gen Zers it employs to formulate benefits communication strategies. Knowing how many workers represent single-parent households could better inform its paid-time-off policies. And with an increasing number of states passing benefit-related legislation, AI can help an employer quickly identify which laws affect which employees, said Cory Jorbin, also a regional compliance officer with Hub.
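For illustration only (this sketch is not from the article), a few lines of Python show the kind of demographic bucketing described above; the employee records and generation cutoff years here are assumptions, not data from any employer:

    from collections import Counter

    # Hypothetical employee records; a real analysis would pull birth years
    # from an HRIS or census file.
    employees = [
        {"name": "A", "birth_year": 1958},
        {"name": "B", "birth_year": 1972},
        {"name": "C", "birth_year": 1985},
        {"name": "D", "birth_year": 2000},
    ]

    def generation(birth_year: int) -> str:
        """Map a birth year to a commonly used generation label."""
        if birth_year <= 1964:
            return "Baby Boomer"
        if birth_year <= 1980:
            return "Gen X"
        if birth_year <= 1996:
            return "Millennial"
        return "Gen Z"

    # Count how many employees fall into each generation.
    counts = Counter(generation(e["birth_year"]) for e in employees)
    print(counts)

In practice, an AI-driven platform would layer natural-language querying and recommendations on top of this kind of aggregation rather than requiring staff to write scripts.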
HR Tasks/Self-Service
A chatbot could potentially answer basic questions about benefits and be used during onboarding for new employees or for all employees throughout the year. For example, a tool that could answer questions like “What’s my paid-time-off balance?” or “What’s the process for taking parental leave?” would be helpful to employees and free up time for benefits staff, Goldenson suggested.
What Are the Risks?
When it comes to analyzing data, “the data that you put into these tools needs to be in really good shape,” Goldenson said. “It needs to be reliable and credible data because the tools are learning from the data that they’re trained on.”
Caution is warranted when using a tool like ChatGPT to generate a policy or to figure out how a law applies, Jorbin and Cherveny noted. Jorbin shared an example of an employer that used ChatGPT to draft a workplace injury policy. The policy it generated stated that the employer would pay for all of the employee’s medical costs, a commitment that was too vague and would have left the employer with sizable exposure had it been adopted.
“If you don’t give the tool all of the different variables, then it cannot respond to all of those variables,” Jorbin said. “In a vacuum, this policy worked. But it did not work in the real world.”
Cherveny cited another example: a friend consulted ChatGPT and received an incorrect formula for calculating how long an employee’s job must be held open while the employee is on qualified leave under the Family and Medical Leave Act (FMLA).
In addition, employee-specific information should never be entered into a publicly available tool, to avoid risks such as revealing protected health information (PHI), she said.
Experts say AI does not completely replace humans.
“You have to keep the human being involved,” Cherveny warned. “Depending on how you’re using AI for things like research, you have to check your results and verify that they’re legitimate and correct.”
“What these tools can help you with is having that digital assistant on your shoulder to help you make smarter decisions and help draw those insights and identify areas that maybe you would not have picked up on before, that maybe need more attention,” Goldenson said. “But they’re tools. So you can’t put all your trust in an AI tool. Any output is going to need human review.”
Steps to Prepare for the Increasing Role of AI
Plan sponsors should first identify the problem that needs to be solved, Goldenson recommended. “Think about where you’re going to get the most value. What are those time-consuming tasks that can be done more efficiently with AI’s help?” she said.
They also need to evaluate the data to confirm it is complete and free of errors and bias. Then they can focus on the appropriate solution, deciding whether to develop their own tool or partner with a vendor, and weigh the costs. “You need to do a cost/benefit analysis. You can’t chase every technology solution, so you need to have a thoughtful process of review and evaluation,” Goldenson said.
Governance should be part of the process too. “There are a lot of amazing things the tools can do, but we’re still responsible and accountable for the output. So whatever the application, organizations should focus on governance first because the risks and concerns are very real,” she said.
HR and benefits staff will need training and skills in using data and AI tools, Jorbin and Cherveny said.