
Task-based Assessments from an Occupational Psychologist with over two decades of experience

Written by Arctic Shores | Apr 30, 2024 9:54:13 AM

In a change from our usual programming, we’re shaking things up this week on the TA Disruptors podcast. 

Later this week we’ll be publishing “The pragmatist’s playbook for skills-based hiring” –– featuring contributions from many members of the TA Disruptors community. In the playbook –– among other things –– we unpack the critical role of the Task-based Assessment in helping TA teams evaluate a candidate’s potential to adapt, learn, and continuously acquire new skills.  

So today, we’re diving deep into the world of Task-based Psychometric Assessments and unpacking the role they play in an effective skills-based hiring process. 

Our guide? An Occupational Psychologist with more than two decades’ experience working for everyone from the NHS to MI5, before spending 10 years in the Assessment Design space at Amberjack and Arctic Shores.

Join our very own Jill and Robert Newry for episode one of Season two of the TA Disruptors podcast as they discuss…

💡The difference between Question-based Assessments, Game-based Assessments, and Task-based Assessments, the challenges with each approach, and how the science varies between them 

💡An insight into the decades of neuroscientific research that sits behind the development of Task-based Assessments and how that research can be used to understand how candidates really think, learn, and interact at work 

💡How Task-based Assessments are designed to mitigate bias and manage adverse impact –– and the risks using black box algorithms within them could present 

💡The role of the Task-based Assessment in a skills-centric hiring process and where Personality and Workplace Intelligence assessments fit within one  

💡The impact of Generative AI on the world of work –– if artificial intelligence will be a billion times smarter than the average human by 2040, what traits, behaviours, and skills should we really be selecting for and how?  


Listen to the episode here 👇


Podcast Transcript:

Robert: Welcome to the TA Disruptors podcast. I'm Robert Newry, CEO and co-founder of Arctic Shores, the task-based psychometric assessment company that helps organisations uncover potential and see more in people.

We live in a time of great change, and the TA disruptors who survive and thrive will be the ones who learn to adapt and iterate. To help them on that journey, in this podcast I am speaking with some of the best thought leaders and pioneers who are leading that change.

In this episode, I am very excited to welcome another expert within the Arctic Shores team, the wonderful Jill Summers. And Jill is an occupational psychologist, starting her career with NHS Scotland. 

You worked not just from an occupational psychology perspective, but also within a clinical team. From there you moved on to Gattenby Sanderson and further into the professional occupational world, which you then continued through various different companies, including TMP. Then we fast forward to the time that you and I first met, Jill, when you were at Amberjack working on global projects, particularly in early careers, and I remember very well you working on the Unilever graduate program. 

But today I'm going to use the opportunity to extract all the things that are in your wonderful mind about what makes task-based assessments so different and how they provide a very different and exciting way to understand people and their suitability for different roles. So let's kick off the podcast with how you and I first met, all those moons ago in Citibank's offices in Canary Wharf, I seem to remember, where I'd invited you to talk to a few of the early pioneers of game-based assessments about how you could learn about people through the way that they do things. So why don't you share with us what was going through your mind when you first came across Arctic Shores and this very different approach to understanding people's personality, strengths, and motivations. 

Jill: Thanks, Robert. That's a lovely introduction, and it's my pleasure to be here today with you. In terms of what was going through my mind about task-based assessments, I was thinking, wow, this is a really interesting and different way of doing things. I mean, having worked in the assessment world for nearly 20 years, I had never seen anything like what you were originally proposing. I was intrigued by that. 

What also interested me was the fact that I thought it could solve some of the challenges that we constantly face in the assessment world and the development world with regards to using existing traditional psychometrics to try and understand people's skills and behaviours important for success at work. What we used to have to do was always triangulate the data and use lots of different questionnaire-based approaches because you're always relying on someone telling you, yes, I'm resilient or yes, I'm organised and I plan well, similar to an interview. And obviously, it's self-reporting so they can change the way that they answer the questions in a more favourable light. 

So it was always about trying to understand lots of different data and trying to gather that data to be able to get a clear and as accurate picture as possible about individuals that you could then present to the recruiting organisation, the employer to say that this person is a strong fit for the role. So when you told me that you were using tasks to try and understand that and tasks based on decades of neuroscience research, I was thinking, wow, this is like going to tap into the primary evidence, that directly observable evidence that I've been searching for, working in the field of occupational psychology. So that's what really intrigued me. 

Also it was extremely pioneering, and you know, at the time, sort of 10 years ago, the word gamification was around, and at Amberjack we were looking at gamification from the perspective of making things more interesting and engaging for candidates across the end-to-end process, and there are various ways that you can do that through reward and how you communicate with them. But from an Arctic Shores perspective, I think what you really meant by gamification was something interactive and dynamic that gave candidates a good experience. So for me, that's why I was intrigued by Arctic Shores, why I wanted to speak to you, and why I agreed to come along and speak to Citigroup that day at Canary Wharf. 

Robert: Yes, well, thank you for sharing it. 

Jill: And the rest is history really. 

Robert: And it is, and it set off a wonderful chain of events, and you've now been with us for seven years. But just going back to that early point: it must have been quite challenging for you to get your mind around, after all that training, all that traditional research and those ways of doing things, very much embedded in how you and the world did things at the time, to then say actually there is a different approach. Was it just the challenges that you were seeing with traditional assessments that prompted you to say, actually, maybe there is a better way here? Or was it just, oh, this is gamification, it sounds quite interesting as something to go and explore? 

Jill: Yeah, I think it was multifactorial. So looking at customers' challenges and the fact that they wanted to give candidates a better experience, to eradicate bias, to look at adverse impact across demographics, and that was really important to customers and emerging 10-15 years ago as a real problem. They wanted to level the playing field for people. And I think another area was just trying to be really accurate about matching the individual to the role, and historically having to do that in a number of different ways, using a number of different questionnaire-based tools and aptitude tests, gathering lots of different data in order to get a fairly rich picture, I suppose, of how someone is likely to behave in a role. 

And when I worked in executive assessment, the question that I was always asked and put under pressure by boards, presenting to boards, was, well, can you predict 100% how this person's gonna behave if they have to lead our organisation through a change, or if they're in a very challenging or difficult environment, how are they gonna respond? And my answer would be, we can never 100% predict how human beings are going to behave in any way. But all we can do is look at the data and say that they are likely to, and this is the evidence that we've got. 

So it was multifactorial. Gamification was part of that because I think that fed into the candidate experience, making things interesting for candidates and giving something back to candidates. And my experience in the early days of assessments was that candidates, at whatever level of role they were applying to, didn't used to get any feedback at all, which I think is just a pretty poor experience. When you're dedicating a lot of your time to complete a questionnaire, or to complete an interview or whatever it might be, to not get anything back, no feedback at all, is just not great from a candidate perspective. So it's all of the above really. 

Robert: I was shocked when I discovered that. Obviously, I come from outside the industry, and I suppose one of the game elements that we very much brought into play, and felt could bring some real value into the psychometric world, was that focus on the candidate experience. And that extended not just to the user interface, but also to giving some feedback. And I remember at the time talking with Safe, my co-founder, just being shocked that there was no feedback given.

And I think you highlight there two really interesting things about, I suppose, what we set out to achieve with Arctic Shores at the beginning. One was, how do we deliver something that's a better candidate experience? But it has to be more than that. I like your point about primary evidence. And I think that's what brought that connection between us. For you, from a strong occupational psychology point of view, it's all about how we get slightly better prediction. We're never gonna get it perfect, as you so well articulated. But we certainly want it to be better than just human intuition. And so how do we get under the surface of what really motivates somebody and what their instinctive, innate, as you say, primary drivers are going to be? 

I think that's really what the Arctic Shores assessment was about, and partly why we changed it from game-based assessment to task-based assessment. And I'd like to just talk through with you that sort of subtle change, because that's how it started with us. The fundamentals haven't changed; we're just changing the language. And how does that change feel for you, going from a game-based assessment to a task-based assessment? Is it just a language change, or is it something more fundamental? How do you see that? 

Jill: I think it's something more fundamental. So I think the language changed because, originally, with game-based assessment, it perhaps would lead customers to think that it's gonna be too gamified, too much fun, and that candidates are not gonna take it seriously enough. When actually it is a serious assessment: it's assessing people's skills or soft skills, the success criteria important for success at work, across cognition and across behaviours associated with personality. So it is really important that candidates take it seriously and apply themselves when they're completing the assessment.

So the language change is important from that perspective, and it also just sets candidates up for success as well. I think it gets them in the right sort of frame of mind to complete the assessment. Task-based assessment to me means something completely different. So the movement towards task-based assessment is about asking candidates to complete tasks, using the advancements of technology to do this. Because years ago, you know, this is what the NHS used to do: using these tasks face to face with people to try and understand what's going on in their brain and individual differences. 

But using the advancements of technology, using dynamic, interactive, puzzle-type tasks for a candidate to complete, we can get that directly observable evidence of the candidate completing a task and demonstrating their determination through that task, demonstrating their perseverance, their ability to overcome adversity and obstacles, all the important things at work that you want to see. So you want to see someone demonstrating that in front of you, and we're able to do that with tasks rather than asking them questions. The drawback for me with the question-based style was always that it's secondary evidence I'm getting: the person's telling me, yes, I'm resilient. But of course, if you really want to understand how someone's gonna behave in any situation, you would rather observe them doing it, see them doing something rather than have them tell you, because there are all sorts of problems associated with that. 

So I think the movement towards task-based assessment really is just a great umbrella term to convey exactly what we're expecting candidates to do and the output out of that as well. 

Robert: I think that's really interesting, Jill, and you highlight that actually the world of occupational psychology, which really started to kick off in the 1950s, has its roots in observing how people did things. And what held people back from that was being able to scale it. And hence we started using pen and paper, asking people to tell us how they might behave rather than observing them. I think it's really interesting that the point about task-based assessments is that they're leveraging the advancements in technology that have happened in that time. But one of the things that you must come across all the time, and it would be interesting to see how you explain this to clients, is: how does pushing a button mean that I can understand what your personality is? Because it just feels so different from what we're used to, which is: if you want to understand about me, Robert, then you need to ask me, Robert, about how I do things, because I'm surely the best person to tell anybody about what I'm good at and how I might behave. And so this whole idea that you could be pressing buttons and really learn about me, how does that work? 

Jill: Yeah, well that's a really good question, asked a lot by prospects, and we get asked even by customers that are already on board with us. So maybe I'll just explain from my perspective how task-based assessments work, and then bring it to life through one of our actual tasks. So a task-based assessment works through the fact that someone is interacting with these very short tasks. And what we're doing at Arctic Shores, and I don't wanna give away our secrets, but what we're doing is basically capturing those data points. 

So we gather about 12,000 data points from each individual candidate (this isn't aggregated across multiple candidates) in the way that they actually interact with the task itself. So that is, you know, how are they clicking the buttons? Do they pause when they get some negative feedback about the way that they've interacted with the task? Do they take quite a long time, in relation to other people, to read some part of the task or make some different type of response? All of that data, those 12,000 data points, feed into our scoring keys on that particular task, and then those scoring keys are validated in the way that you would validate any other psychometric tool. So our psychometrics team, interestingly, is made up of not just occupational psychologists but neuroscientists, data scientists, psychometricians, statisticians, people that have got maths backgrounds. 

So they're working together as a collaborative unit to understand these scoring keys and validate them, so that we basically know that what we're claiming to measure is actually what we're measuring, and that we're doing so in a reliable way. So that's how a task-based assessment is different, and at Arctic Shores we never give away the secrets of our scoring keys to anyone. We tell customers, and go through with them, how we build what we would call their success profile: what they're actually screening or sifting candidates against, in terms of what success looks like, what great looks like in the job. And they can keep that profile, and we can tweak it and make changes as the data evolves. 

But the scoring keys themselves are the sort of magic behind it, validated against well-known and established constructs that have been around for a long time. Arctic Shores hasn't just made the neuroscience tasks up, as you're very well aware. Our scientists have based them on 30 years or more of neuroscience, so decades and decades of neuroscience research.

These are well-known neuroscience tasks that have been used in hospital settings and research settings in the past to understand how someone's brain is working: what parts of an individual's brain are working when they're going through certain cognitive processes, making certain decisions and judgments, when they're behaving in different ways towards different situations and different people. So they're well-known tasks. We have just, with the advancements of technology, brought them to life, validated them, and made them work in a setting where you can use a mobile phone, a tablet, or a computer to complete them, and we've made them obviously look nicer and a lot more candidate-friendly. 

So that's the sort of high-level science behind our assessments. And if I bring it to life a little bit more with an example of one of the tasks, quite a well-known one that everyone is familiar with is our safe-cracking task, Code Breaker. What the candidate needs to do is interact with this task and try and stop the dial moving on a certain number in order to unlock a series of codes or locks to unlock the safe. So the dial moves around at varying speeds and changes with the way that the candidate interacts, and the number that they've got to stop it on changes as well. So they basically have to press a button on their phone, or the space bar on their computer, to stop the spinning dial when the number flashes up in the correct place. And as I say, it moves at different speeds, it changes, and it interacts with them as they're interacting with the task. 

Obviously, with that type of task, it's dynamic, it's interesting, it's exciting, it's like a puzzle, so candidates engage with it really well. And then they start to understand: oh, actually, it's changing, and actually, I need to speed up. So we're measuring lots of different aspects that are important for success at work. From that description, hopefully, you can see that we're measuring core cognition and processing speed: as the dial's moving round, you have to be thinking quite quickly in order to interact with it and click in the right place. We're looking at how someone perseveres: if they get it wrong, and it flashes up that that wasn't the right code as they've clicked on it, how do they then respond, how long do they take to get back into the task? We're measuring determination, perseverance, all those attributes and positive success criteria important for success at work, linked to things like delivering results. So that's essentially how our tasks work, and that's an example hopefully brought to life through Code Breaker. 

Robert: Yes, thank you for sharing that. So I get that, so it's the micro shifts in behaviour that's giving an indication of how they're thinking. And of course that all makes sense that your personality at the end of the day is nothing more than the chemical and electrical activity that's going on in your brain. And so if you can find a way of picking up proxy measures for that chemical and electrical activity that's going on in your brain through pressing buttons, then you can understand what's happening in somebody's mind.

One of the worries that I'm sure many people still have, even when you explain it the way you just have and it starts to make sense that we can learn about people that way, the big worry about anything that's task-based, and particularly the last task you described, is that somehow there's some sort of skill involved in it. How does the science behind the way that Arctic Shores has developed that avoid the challenge that we end up measuring somebody's ability to quickly press a button and respond, as opposed to what's going on in their mind? 

Jill: Absolutely. So as you would expect from the team that I just described, our psychometrics team, you know, they're professionals from a lot of diverse backgrounds, and they follow psychometric test publishing guidelines, you know, through the British Psychological Society, for our assessment. So they're going through exactly the same methodology that you would take if you wanted to design a new personality questionnaire. They're doing all of the same things that you would expect other test publishers to do to make sure that the science is actually there and sits behind it. So that includes testing before we even went live with the Arctic Shores assessment all those years ago. 

I remember speaking to you in the very early days before I came on board, and the research team actually spent a good few years just researching and building out these tasks, validating them, and collecting all that evidence for the technical manual. So they are doing group differences studies to understand whether there are differences between people, say people that may be younger and that we might think game more. Obviously, some of these things are anecdotes and some are myths, so what we've done is test all of that to understand whether there are any differences across different groups of people, depending on people's backgrounds. Are there differences between males and females, what does that look like, are there group differences there? And what does our research show? You know, have you got an advantage if you're really good at clicking buttons? No, that's not the case. 

There is not a difference between someone who's got hours and hours of gaming experience, traditional gaming, playing Nintendo or PlayStation or whatever, versus someone that doesn't. So there are no statistically significant differences there. We've done all of that, and it's all documented, because that can be a worry for organisations. And the tasks work in the same way that they would in a medical setting as well; all those years ago, when they were used in hospitals, it was the same principles. They also work equally well for men and women. I'm not saying there aren't genuine differences in personality traits between men and women, because there are, but that's a whole different podcast.

There are some genuine differences for good reasons, for genetic and biological reasons, but in terms of like gaming type differences or differences between people with different backgrounds, our research shows that they're not significant. And that's documented in all of the research and the technical manual that we have. 

We also have adjustments in place, because another question that leads on from that is reasonable adjustments: what can we do for candidates that have disabilities? So we make reasonable adjustments, and our psychometrics team is constantly collecting data and research from the demographics that we get through our assessments, with the millions of people that have completed them now, so that we can make adjustments in the right direction for people. So, for example, we can apply scoring key adjustments for people who have got dyslexia or dyspraxia; there are different adjustments or accommodations that you might need to apply. 

Robert: What's interesting, I think, is that the way we accommodate the neurodivergent community is different from traditional assessments. And I'm going to explore in a second a bit more about some of those differences. But because it's task-based rather than question-based, we're not presenting people with an extra bit of time or something in order to be able to process something that they struggle to process. Rather, anybody can complete the tasks, and any cognitive differences that come about because of neurodiversity are accounted for, so that people who are on the neurodivergent spectrum aren't in any way disadvantaged. And that can be done behind the scenes, rather than presenting them with a challenge as they face what they're being asked to do, and then them having to accommodate as well as the assessment having to accommodate. 

So, thank you for that explanation. Having explained the science, let's explore a little bit more: you must all the time be asked about the similarities and differences between a task-based assessment and traditional assessments. Is the way that Arctic Shores does its task-based assessments so fundamentally different from question-based assessments that people have to get their mind around a completely different style and approach and personality framework? Or is there actually some overlap, but also some differences? Perhaps you could share your perspective on that. 

Jill: Yeah, that's an interesting way to put it. I mean, I don't think it's so wildly different in terms of the framework that sits underneath. We still believe what the rest of the research community believes around personality factors and models, and we're using all of that best-practice research as well. So the fundamentals of what we're doing and what we're trying to measure are similar: we're not saying that we're measuring anything wildly different from what a traditional psychometric provider would measure in terms of what's important in the world of work, from a behavioural or soft skills perspective, or behaviours associated with personality and cognition. So that's similar. The fact that you can use our assessment in an early-stage sift to support with a number of different challenges, that's similar. But I would say that that's where the similarities end, actually.

I've racked my brain, Robert, and those are the similarities. In terms of differences, the main ones are that it's task-based, it's dynamic, it's engaging, so that brings lots of advantages, increasingly so nowadays with the advancements of generative AI, which we can talk about in a moment. The advantages that brings for employers are that it's dynamic, it's engaging, we've got evidence to show that it reduces anxiety for candidates, and it's not timed.

So therefore, the candidate experience is optimised. They get a candidate feedback report within 30 seconds of completing the assessment. So again, candidate experience is optimised. Diversity: we do a lot of work on diversity, and I know that's one of the reasons you and Safe founded the business in the first place, to create a level playing field across all demographics. So we do a lot of work around adverse impact, running our profiles through models before we actually go live, and a lot of work on our platform with regards to that as well, which we've just talked about, around reasonable adjustments and the design of the assessment. 

So we're levelling the playing field so employers will be able to achieve their diversity statistics, which we've seen with the likes of Siemens. That's why Amazon is interested in working with us now: because they want to make sure things are fair and get the same number of females and males to the end stage of their processes. They want a diverse mix of individuals that they can appoint to roles. 

Robert: I'd just like to talk a little bit about that, because at the end of the day, what makes a good selection tool is whether you are first putting forward better-suited candidates for hiring managers to select from, and then ultimately whether they perform in the job better than with either the previous process or randomly bringing people in. Can you perhaps share how a task-based assessment has brought about some improvements or differences with some of the clients that you've worked with over the years? 

Jill: You make a good point about predictability; that was going to be my third one. You want to be able to identify, at an early stage, the candidates that are very closely matched to your success criteria, that's going to help, and then you want to be able to make sure they do perform in the job, and is there a correlation, is there a link there? So we do a lot of work around that, around the predictive analytics. And I think that's hugely important. So I would say that's something a task-based assessment allows you to do, because you're gathering that directly observable evidence. We talked about the 12,000 data points that we get per candidate.

We've got huge, huge volumes of data there per candidate. We can use data that already exists to show customers and demonstrate to them: look at the predictive nature if we run this success profile through this model that we've already got, look at the outcome it's given other customers in similar sectors to you. And then we also do that when they're onboarded with us, as part of their ongoing customer journey as well. So we look at their points of value and we demonstrate that to them as they're on that, hopefully, continuous journey with Arctic Shores. 

Robert: So they get that predictability, and that's really important. Jill, one of the other points, just picking up on something that you said earlier: 12,000 data points. That's a huge amount of data. And one of the things that people will be concerned about is, gosh, with this huge amount of data, is it complex and difficult for hiring managers or organisations to use that data to help them find the right candidates to be talking to in hiring? 

Jill: So the short answer is no, it's very simple. And that's also one of the founding principles of Arctic Shores: to try and bring psychometrics to the masses, make it understandable, user-friendly, and free from jargon. That's one of the reasons why I wanted to join you and Safe in the first place, because I think it's really important that if we have these tools, everybody should be able to use them and get insights from them, right across society. 

So it's very simple. With our platform, within sort of three to four sessions, which can vary from 30 to 60 minutes each depending on what customers want to achieve, we can onboard them. That's their training in how to use our platform, how to set up their assessments, how to invite their candidates, and how to interpret their reports, because we've made them as simple and as jargon-free as possible. In terms of the outputs that customers receive, we've talked about the candidate report, so candidates get that. 

Customers can view rankings on their dashboards on the platform, so if they want to sift their candidates, they can see who's coming out on top by overall score on their success criteria. It's also broken down across the areas that are important to them, the essential criteria for the job, so there are very clear scores in that respect. They also get reports for their hiring managers and TA teams, which give a lot of insight into individual candidates' strengths and development areas. And last but not least, if they want to use it, they've also got an interview guide. So we try to make it as simple and user-friendly as possible, and as easy as possible to get up and running and start using the products.

Robert: Yes, so no big training, no special qualification. I remember right at the beginning being surprised, and somewhat shocked, that for anybody to start using a traditional psychometric assessment, they had to go through a two- or three-day training programme costing several thousand pounds to be certified, which would then enable them to give feedback on it, rather than just thinking: okay, how do we make the language simpler so that everybody can understand?

Jill: And how do we put safeguarding around that? Our tests are absolutely serious psychometric tests, but we've got a lot of safeguarding around them for customers, so that they don't misinterpret anything or use the data in the wrong way. Our platform has been specifically designed to support that and ensure they don't fall into those pitfalls and trip over themselves.

Robert: And I think that's one of the other important points: making sure people don't misuse the data. Part of it is the training to give feedback, but there's also a risk in the design and use of any psychometric assessment that an individual's perspective, bias, or simple misunderstanding of what the data means results in poor decisions, or decisions that have no connection to what success looks like in the role. So how does the Arctic Shores platform guard against that?

Jill: There's a lot of guidance on the Arctic Shores platform. Customers can self-serve, but there's a lot of guidance and safeguarding around how they do that. We've got a number of norm groups that are appropriate across the world, and we steer which ones should be used through the set of questions we ask customers. They're hand-held through onboarding: they're paired with a customer success manager from the customer success team and a business psychologist from professional services to onboard them and make sure they're comfortable. Some customers want more onboarding than others, and we'll spend a little more time with some than others, but there's a set best-practice onboarding that we do, the fundamentals for everyone. Over and above that, wherever they are on their hiring-for-potential or skills-based hiring journey, we'll support them from there. So there's a lot of support and guidance, and a lot of digestible content on the knowledge base that they can go back to as well. It's a continuous loop of support that we provide at Arctic Shores.

Robert: But the world is all being turned on its head at the moment now, Jill. 

Jill: We're in very interesting times, Robert.

Robert: I like to point out, and have had a few discussions on this podcast about, generative AI being like a calculator: it can level the playing field, but it can also reduce individual differences if you take a traditional recruitment and selection approach to it. How do you see task-based assessments helping organisations deal with generative AI, both in terms of selection, and also, as we've touched on here, not just how they measure but what they should be measuring?

Jill: Absolutely, I mean, it's very interesting times. I know generative AI and ChatGPT is one of your favourite topics, and so it should be for most people, actually. I think it's going to transform the world in which we live. I'm reading a book at the moment called Scary Smart, in which Mo Gawdat, who was a commercial director within Google, says that by 2049 artificial intelligence is going to be a billion times smarter than the smartest humans. To give you a comparison, that's like human beings having the intelligence of a fly while artificial intelligence is as intelligent as Einstein. So I think AI in general has implications for the world in which we work: not only the jobs that are going to be needed, but also how we actually get the right-fit candidates into those roles in the first place, and what skills will be important for the world of work in the future.

With regards to ChatGPT and how it's impacting recruitment processes, the reason talent leaders should be paying attention is that we've seen in our own research, and have already started to hear from our customers and prospects, that they're noticing differences at the later stages of their processes, because candidates are using ChatGPT, as well they might, to support themselves through those processes.

We saw in our research that with some traditional psychometrics, you can use ChatGPT to hit the 98th percentile on verbal reasoning, and numerical reasoning is similarly extremely high. At the moment, ChatGPT has a slight issue with images, but it's advancing every day, so it will overcome that; within a matter of months, I think we'll see it able to digest and understand images. It reaches the 70th percentile on situational judgement tests, and it's learning every day, so that will continue to improve. Apparently it's also at the 70th percentile on the legal bar exam. ChatGPT has an IQ of 152, they say; the average human IQ is 100. And again, it's advancing every day.

So I think we've got a real issue on our hands with recruitment processes. How do you know you're getting an authentic assessment of an individual, or whether they're using ChatGPT to get through the initial and later stages of your process? The one easy answer, to me, is that ChatGPT cannot break a task-based assessment.

So use a task-based assessment, and you get an authentic assessment of the skills required for success in the roles you're recruiting for. That would be my answer.

Robert: Great thoughts. When you shared that stat from Scary Smart, that artificial intelligence will be a billion times more intelligent than the average human, I imagine it creates quite a lot of concern for a lot of people: are humans going to have any role to play in knowledge-based work? Where I personally am reassured that humans will continue to have a role is in things like curiosity and creative problem solving. These come from the human mind's worldview framework, which I think artificial intelligence is a long way off being able to mimic, let alone replicate. So you can have something that's a billion times smarter, and we see that in chess and other cognitive games, Go being another one.

And I was pleased to see earlier this year that researchers at the university in Berkeley, California found that an average Go player, with a little creativity and problem solving, was able to beat the best artificial intelligence Go program. So there is still hope for us, and I think it comes back to how different types of human skills will matter in the future, compared with what we valued in the past.

Jill: I totally agree. Curiosity, innovation potential, creativity, interpersonal skills, relationship management: all of those human skills that we can measure with task-based assessments are going to remain important in the world of work. Computers may replace some of the more repetitive tasks that used to take us a very long time, but there's absolutely still going to be a place for people. I have to say that; I'm a psychologist.

But yes, I agree with you about that skill set for the future. It will be ever-evolving and ever-changing, but it will definitely sit in that interpersonal skills and relationship management space. That's why it's more important than ever to identify those skills at the very early stages in an authentic and genuine way, keep those people in your process, and let them contribute to your organisation. It's also about how people are going to work with machines and artificial intelligence in their roles in the future: how do you use the tools we've designed and have available to your advantage, to optimise the world of work?

Robert: Yes, you make a great point there, Jill. I heard the other day that AI, rather than artificial intelligence, should really stand for augmented intelligence. And that's really what you're saying: human skills will continue to be valuable, but we'll be able to hand some of the pieces to artificial intelligence. It will help us do the things we need to do in a better way, less reliant on raw cognitive processing.

So Jill, one of the other things you mentioned was some of the challenges around bias with traditional assessments versus task-based assessments. Nobody builds bias into the way they design things, of course. But the very nature of different styles of assessment means some areas are better served by a task-based approach than a question-based one. Can you share how some of those differences have been addressed, some of the challenges with timed tests and the pressure and anxiety that being timed causes, and how a task-based assessment overcomes them?

Jill: Yeah, sure. The task-based assessments associated with our personality tasks aren't timed, so that takes some of the anxiety away. As I mentioned earlier, we make a number of reasonable adjustments, as of course do traditional test publishers, but we also continue to build on the data we have, because millions of candidates have completed the assessments and we get 12,000 data points each time. So we look at that data constantly to understand whether any differences are emerging, and how that affects our methodology around reasonable adjustments.

What we also do, because we've got so much data, is look at adverse impact and bias against success profiles: against what the customer wants to sift or make decisions on. Before customers go live, we work with them in a very collaborative, partnership way to remove that bias from our models. I'm not going to go into too much detail, because that would give away some of our unique methodology, but we do it before customers go live. We use proxies and, with huge samples, we predict: this is what it looks like for this demographic group, or if you set your benchmark at this level, say the 50th percentile or above, this is what will happen, this is what it could look like. Customers can then work with us to make their decisions: actually, I don't want some of that creeping in, so I'm going to dial that down, et cetera.

So we do that before we go live. Then, as part of our continuous cycle of improvement and customer relationship management, we also look at it on their live samples. So it's a continuous process of looking at that data and eradicating that bias, because that's central to what Arctic Shores does. As we mentioned, it's one of the founding principles of why you and Safe set up the business: to level the playing field and have an assessment that's truly fair.

Robert: Can I pick you up on one point about that? A couple of times you've said you don't want to reveal too much about the magic and the secret behind it. One of the concerns many people have is whether there's some sort of artificial intelligence in the way it does things. Is there a black box element in there?

Jill: No, it's not a black box; none of our algorithms are black box. They're what I would call open and transparent, so transparent that we share everything with the customer. We present it exactly: this is what you want to assess someone on; this is what your profile will look like; this is how it fares against these demographic groups under the four-fifths rule; this is what it will look like if you put your pipeline of candidates through it, the volumes and how they'll come out. So it's very open, and customers make their decisions about finalising the profile, how they want to use it, and the benchmark or cut-off they're setting. Completely transparent and completely open.

And one thing about Arctic Shores that's very important to me, and I think you knew this when I joined: I don't want to include anything in these models or algorithms that doesn't make psychological sense. If it's not job-related, and it can't be explained from a psychological perspective why that particular behaviour or skill is important for success in that job or organisation, we shouldn't include it. If one of my psychologists can't explain the rationale behind it, we shouldn't include it. It should be repeatable as well: not a model so obscure from a data perspective that we can't reproduce it each time with a similar sample of candidates. That was really important, because I don't want to build models we throw just anything into; throw enough things in and eventually something will stick. So it has to make psychological sense, it has to be job-related, and we have to be able to explain it to customers in very simple language, so they're comfortable and know exactly what their candidates are being assessed against.
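The four-fifths rule Jill references is a standard adverse-impact check: no group's selection rate should fall below 80% of the highest group's rate. A minimal sketch of that calculation (the function names and the sample figures are illustrative, not Arctic Shores' actual methodology or data):

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Map each group to its selection rate (selected / applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]], threshold: float = 0.8):
    """Compare every group's selection rate to the highest-rate group.

    Returns (passes, impact_ratios), where each ratio is the group's
    selection rate divided by the highest group's rate; a ratio below
    `threshold` flags potential adverse impact.
    """
    rates = selection_rates(outcomes)
    best = max(rates.values())
    ratios = {group: rate / best for group, rate in rates.items()}
    return all(r >= threshold for r in ratios.values()), ratios

# Illustrative pipeline: (selected, applicants) per demographic group.
pipeline = {"group_a": (48, 100), "group_b": (30, 100)}
ok, ratios = four_fifths_check(pipeline)
print(ok, ratios)  # group_b's ratio is 0.30 / 0.48 = 0.625, below 0.8, so flagged
```

Running a proposed benchmark through a check like this before go-live, as Jill describes, lets a customer see the projected impact on each group and "dial down" criteria before any candidate is affected.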