Any time you’re looking for a new product to solve your problem, we realize that the comparison process can be a little confusing, especially if you’re not an expert in the field. There’s a lot of buzzwords floating around and it’s hard to separate the wheat from the chaff.
Like with any problem, there is always the ‘build vs buy’ question. Should you create and maintain your own solution internally, or should you use a specialised product that’s already built to solve that problem? Should you take the DIY option, or call a professional for help?
This is a definitive guide into using Alooba Assess vs DIY take-home projects for analytics and data science. This guide provides an in-depth analysis of what’s involved with building and maintaining your own DIY take-home projects for data roles vs using the established, specialized product Alooba Assess. We delve into all the details of the pros and cons of each approach, and hopefully cover off some areas you might not have thought much about. Got questions? Feel free to contact us here.
Alooba Assess is unique, being the only skills assessment platform tailored specifically to data skills. That is the fundamental difference between Alooba Assess and all other products that you might be evaluating.
Looking for a blow-by-blow of Alooba Assess’s functionality? Check out a full rundown of the features and capabilities of Alooba’s various products here.
Looking to assess your candidates for data roles? Get started now with Alooba Assess.
Many organizations we speak to already have some kind of manual testing or assessment measures in place for data roles like data analytics, data science & data engineering. Normally these are a little crude and primitive, and exist within silos in the business, with individual hiring managers having their own tests. There is no real consistency and it’s basically down to each hiring manager what they want to do and how they do it. It’s the wild west in some ways.
These analytics projects are normally consistent of some cobbled together datasets from the business or sometimes an external source (e.g. Kaggle) and some questions & instructions written in a Word or Google Doc. Some managers choose to host their datasets externally, for example on GitHub instead.
Someone in the talent acquisition team or the hiring manager will manually email the candidate the assessment details when they get to the ‘assessment stage’. The candidate will complete the assessment in their own time at home. The assessments are normally untimed with the candidate told something like ‘Try not to spend more than 5 hours on this’. The candidate will normally then spend as long as they are able to over the week or so that they have to complete it. Once done, they will then email back the results, or share the hosted file with their results (e.g. on GutHub, Google Drive etc.).
Normally then a senior individual contributor in the team will then receive and grade the assessment. Depending on the complexity of the assessment, this will be 30-60 minutes of effort. They may or may not then share feedback with the candidate on how they went. Normally this would be done via email, and it might be channelled through the talent team or recruiter who is dealing with the candidate directly.
Some organizations also pair the take-home project with an interview, where the candidate will then present their findings, and the interviewers can also drill into the candidate’s approach and delve into why they approached the problem the way they did. This also affords an opportunity to provide live feedback in the interview itself. Of course, this is only for successful candidates who get to that stage of the process. Some others who failed the take-home project won’t get the luxury of this live feedback.
Alooba has several products. Alooba Assess is used by organizations to assess the skills of data candidates and is the focus of this comparison article.
Alooba Assess is unique, being the only skills assessment platform tailored specifically to data skills - the skills needed for data analyst, data science & data engineering roles. That is the fundamental difference between Alooba Assess and all other products that you might be evaluating.
There are two main places to use Alooba Assess to assess their data roles.
First, right at the top of the funnel, as a substitute for manual CV screening. Manual CV screening is known to be inaccurate and biased against people from certain ethnicities. Any manual process is also expensive and bogs down the hiring process, increasing the time to hire. 95-99% of candidates get rejected at the CV screening stage, and this is where the biggest bias and problem is in hiring. This is the best place to use Alooba as a short screening quiz, as it impacts every candidate.
Second, Alooba Assess can be used further down the funnel, as a replacement to DIY take-home projects, which is the focus for this comparison article. DIY take-home projects are normally used as a penultimate stage in the hiring process, on a limited set of candidates (3-5). Because of the lack of scalability of managing these take-home projects, it’s generally not possible to use them earlier on in the process, without blowing out your hiring costs and reducing your time to hire.
Some organizations choose to use Alooba Assess in two stages of the hiring funnel - a quick initial screening quiz and then a more in-depth assessment later on.
For DIY take-home projects meanwhile, as we’ll see later on, there’s a heck of a lot of manual effort in building, administering and maintaining them, which makes them fundamentally unscalable. Because of the lack of scale of these assessments, they’ll normally be used towards the end of the hiring process, on a small number of candidates (say, 3-5). It would be impossible to, for example, give every candidate the opportunity to complete the assessment. This excludes 95-99% of applicants from getting the opportunity to show-off their skills.
As we’ll elaborate on below, there are a few situations where it does actually make sense to use a DIY take-home project rather than Alooba Assess.
The best situation to use a DIY take-home project is when:
In this specific scenario, we’d recommend sticking with your own take-home project because it probably will not be worth your effort to move to a better solution, as you rarely experience the problem of hiring and selecting the best candidate, and it would actually slow down your hiring to pause and onboard a new product into your hiring stack.
Alooba Assess is still a solid option for infrequent hires, as long as you are able to easily onboard new products in your organization. We have a ‘Starter’ option for per position hires, so feel free to explore that.
Ultimately, any build vs buy decision really boils down to four factors: cost, quantity/volume, speed and quality.
Cost: what is the true total cost of using Alooba Assess vs a DIY take-home project? Volume: how many candidates are you actually able to give a fair opportunity to and assess their skills? Speed: how long does it take to get up and running with Alooba Assess vs a DIY take-home project? Quality: what is the difference in quality between using Alooba Assess vs a DIY take-home project?
As you’ll see from the analysis below, the biggest advantages of using Alooba Assess are reduced cost, increased volume & improved quality.
Depending on your organization and how easy it is to use external software products, using Alooba Assess may be faster or slower than using a DIY approach.
In summary, in 90% of situations it makes sense to Alooba Assess rather than a DIY take-home project because it’s cheaper, you can assess more candidates, it’s faster to start and better quality.
In general, every problem seems simple when you haven’t delved into the details, and it’s good to be aware of the Dunning-Kruger effect. We’d recommend generally sticking to your element and don’t bother reinventing the wheel.
Looking to calculate your cost saving of using Alooba Assess vs a DIY take-home project?
The full cost of DIY take-home projects can quickly add up. We've prepared this handy calculator for you in a Google Sheet.
Here’s the summary cost breakdown for a DIY take-home project.
You should expect it to cost around $8000USD to create 1 take-home project. Take-home projects are normally specific to a role, so if you have 3 different roles you hire for, expect that to be $24000USD total build cost.
Maintaining a project will cost around $1250USD a year on average.
In terms of actually assessing candidates, expect it to cost around $400USD per candidate assessed. For most organizations then who bring about 5 people to the take-home project stage, this will be around $2000USD per hire.
Using Alooba Assess instead of a DIY take-home project reduces your cost per hire by an average of 80%.
Building your business case for Alooba Assess? Check out our ROI Calculators here.
Let’s do a deep dive on the cost differences between using Alooba Assess and a DIY take-home project.
The cost of building your own DIY take-home project comes in three parts:
Building: this is the cost of establishing a DIY take-home project in the first place. If you already have a project in place and you intend on keeping it, you can ignore this component. Administering: this is the cost of actually operating the project when candidates take it. Maintaining: this is the cost of updating and maintaining the project through time.
For Alooba Assess, feel free to check out our various options here.
Building - $8000USD per project
On average, you should expect it will cost you $8000USD to build one (1) take-home project.
Here’s a detailed outline of what you should expect if you are going to create your own DIY take-home project for your analytics & data science roles. Sounds like a lot of work, right? Yes, it will be.
Defining the role requirements - 1-2 hours
The start of any hiring process is to sit down and think about your hiring goals in detail. Who do you want to hire and why? What skills & experiences do you expect them to have? What do you want them to do in your organization? It may be tempting to skip or expedite this step and start sourcing candidates, but this is a trap. Get the details down on paper and then you’re in a good position to design the rest of your hiring process to align with what you’re trying to achieve. We see a lot of organizations that say they’re looking for X, but then their hiring process is actually geared to finding Y. Try to avoid this cognitive dissonance.
To compile an internal DIY-project you will need to first understand the skills that you would like to assess. Each role you hire for has different role requirements. This might mean you will need a separate take-home project per role that you hire. Do you hire multiple different types of roles? This cost estimate is on a per take-home project basis, so you can multiply this by the number of unique roles you hire to get a total cost of building your take-home projects.
Preparing the data - 4-6 hours per project
Once you have sat down and understood the role requirements, you’re then in a position to understand what skills you would like to assess in the take-home project. For example, for a product analyst role, maybe you’d like to understand the candidate’s ability to query databases, wrangle data, define product metrics, create appealing visualizations and interpret experiment results. OK great, now you need to design a project to assess all of those skills.
For analytics & data science roles, normally this begins with finding some datasets. Most organizations would like to assess their candidates in the real problems that they face day-to-day, and so choose to use their own datasets. To obtain your own data you will most likely need to run some existing reports from your reporting system, or write some custom SQL queries to get something specific from your data warehouse. Once you have a general idea of what you want, you will then also need to sanitize the data.
First, you’ll want to remove any personally identifiable information if it includes customer/user information. In other words you need to anonymize or pseudonymize the data. This includes obvious unique identifiers like names, email addresses, phone numbers and unique identifiers from other systems (tax id, social security etc.). Be careful here, because even heavily segmented aggregated data can potentially reveal an individual’s identity. For example, you might report data about the number of users in a particular postcode, using a particular browser, of a particular gender identity, who had bought a certain item on a certain day. Something that specific can potentially reveal the person’s identity.
Second, you’ll want to remove data that might be commercially sensitive. While it might be normal for you as a head of analytics to know exactly the amount of revenue or leads earned in your organization in a particular segment, you probably don’t want to reveal these to every external candidate. So you will need to clean up the data or fake/fudge the numbers. Faking/fudging the data comes with its own additional complications, as you would need to do this in a way that still renders the data meaningful. Creating realistic & meaningful fake data is actually surprisingly tricky!
Instead of preparing your own data, you could instead use a publicly available dataset, for example from Kaggle, Google Big Query or GutHub. This will reduce the amount of effort to compile the data, but you will still need to wrangle the data (e.g. remove irrelevant columns) and then you will have the added difficulty of not understanding the data. This means you will have to explore it considerably yourself to be able to understand what the metrics and dimensions actually mean. This will make the next steps considerably harder.
Additionally, be careful with how the data is updated. If it’s static, then you can write your questions and answers knowing that the answers will still be correct by the time the candidate completes them. However, if the data is dynamic, you might want to download/cache the data and host it yourself, rather than asking your candidates to access it directly via the source. This means you will be able to control the data and ensure you and your candidate have been analysing the identical dataset.
Preparing the questions - 2-4 hours per project
With the data sourced and ready, now you can prepare the questions that you want to ask the candidate. Some organizations like to just use one big question - ‘Build a model to predict x from these data.’, while others like to provide smaller sub-problems, to help step the candidate through it.
In any case, you will need to think carefully about the questions you ask, to ensure that they will give you the opportunity to evaluate the skills that you have identified as required. If the question doesn’t assess something you wanted to assess, then drop it as it’s superfluous and just wasting candidates’ time. Likewise, if there’s an important skill you want to assess but don’t have a question for, you’ll need to think of one to avoid any gaps. Again, we’d suggest trying to avoid falling into the trap of having a hiring process that’s not aligned with your goals.
If you have individual questions, assigning a mark/grade to each question makes things easier. The question’s grade should basically be a weighting of how relatively important the question is. Without this, you’ll end up treating things that are very important, and not important, the same, and your final score for the candidate will not be an accurate reflection of their performance. And for the candidate, they’ll understand where to focus their efforts best and understand what is relatively more important.
Writing questions is an art and a science, and writing a good question is itself a hard skill. Like anything else in life, if you are doing it for the first time, don’t be surprised if you suck at it. At Alooba, figuring out what a good question is has taken us a while, and we’ve had the luxury of 50k+ pieces of candidate feedback to iterate through and improve our content. You won’t have that benefit if you’re creating your own analytics project.
These learnings culminated in a 12-page expert guide on how to create and review questions. All our questions are peer-reviewed by experts, and this is managed through a customized content management system.
From our experience of creating, peer-reviewing and publishing more than 3000 questions over the past 3 years, there are some common places our content experts have fallen down when writing questions. For the 3000 questions we’ve published, there’s another 1000 or so that have been rejected, never making it on to our site.
Here’s the typical rookie errors when writing questions for the first time, which you should try and avoid:
Preparing the data dictionary - 2-3 hours
You will need to provide at least some basic description of what the data is to the candidate, otherwise they could make any random assumption, leading them down the wrong path when doing their project.
On the light side of things, you can provide a short description of each table/dataset that you provide. This description should definitely include what a row is in the data. Is it raw, row-level transactional data where one row is one purchase? Or is it aggregated data where you have a metric (e.g. Number of Purchases), already grouped by some dimensions (Day, Country etc.). Try to be specific here because subtle differences drastically change the interpretation of the data and all subsequent steps in their analysis.
With this ‘lite’ data dictionary, you’ll leave some ambiguity for the candidates to be able to make & state their own assumptions. Within reason, this is actually a good thing because real analytics is full of ambiguity and candidates will need to make common sense decisions once they join the role - not everything will be documented for them.
For more junior roles, or where you might want to reduce the variance in the projects that get delivered, you might want to have a more granular data dictionary. The easiest thing to do is then have a written description per column of data that you have provided. So, instead of just having a description for the table/dataset, you delve into what each column means. For example:
ClientCountry: the IOS 3166-1 Alpha-2 country code of the customer, based on the geolocation of their IP address when they visited the site. LanguageCode: the ISO 639-1 language code of the user, based on the default language of their browser when they visited the site. Payment: the total amount paid, in USD, converted from the actual currency at the time the purchase was made.
This greatly reduces ambiguity, which simplifies things for candidates, and as a result reduces the variance of what they will provide. It will also reduce back-and-forth once they start the project, because there will be fewer things that they will want to clarify with you. Obviously though, it’s more effort to set this up in the first place.
Preparing the answers - 8-12 hours per project
With the questions & data dictionary prepared, now you need to prepare the answers for the questions. Normally here you should be thinking about a ‘model’ or perfect answer, and starting there. Basically, you or someone in your team will need to actually complete the whole project end-to-end.
We’d strongly recommend that you do not skip this step, tempting though it may be, because without having done it, you really have no sense of how hard or time consuming the project will be. Everybody underestimates how long something will take, so it’s important not to fall into this trap, thinking you’re giving candidates a 5 hour project and it’s impossible to complete in that time. Underestimating how long something will take is a well-known cognitive bias called the planning fallacy.
Your answers should include what exactly you’re expecting the candidate to provide. This should include the output (e.g. a report, presentation, dashboard, working model, application etc.) and whatever are the workings (code, calculations etc.), and presumably some way for the candidate to convey their assumptions.
Preparing the marking guidelines - 1-2 hours per project
Now that you have the ‘model’, or perfect, answers prepared, you need to create some basic marking guidelines. Having at least some basic marking guidelines will help you to be at least vaguely objective when you’re assessing the candidate’s work.
What are the things you’re looking for in each answer? What would be considered a red flag? And in general, what does poor, good & great look like for each question?
You might find it difficult to establish this, without having seen any candidates’ answers yet. That’s fine and normal.
Preparing a benchmark - 1 hour per project
With the marking guidelines prepared, hopefully you now have a general idea of what ‘good enough’ looks like. What would warrant you to say ‘yes’, and advance the candidate to the next stage? This might be an overall score for the project (e.g. 60%), or might be a little more advanced than that. For example, you might want someone who excelled in the SQL part of the project (say, minimum 80%), and then did decently in all the parts.
Really, this comes back to the first step, where you set out your hiring goals & criteria, and defined exactly what you were after.
Preparing the rules & instructions - 1-2 hours per project
Now with your project pretty much good to go, the candidates still need clear guidelines on how to approach the project. We recommend covering off as much as possible to avoid lots of back and forth between you and the candidate, slowing down the hiring process.
At minimum, you should tell the candidate: How long do they have to spend on it? Is there a particular deadline or timeframe? Are there any required tools or technologies that you want to be used? For example, you might require that candidates use a particular version of SQL (e.g. PostgreSQL) or use Python instead of R, presumably because you use these technologies in your organization. Note, we wouldn’t recommend confining candidates to use one specific technology, but rather let them solve the problem the way they wish. How to submit their work (e.g. by sharing a Google Drive, hosting an application on GitHub etc.)? How do they deal with any ambiguity/assumptions that they have to make throughout the project? Who do they talk to if they have issues/how do they get support?
Administering - $400USD per candidate
On average, you should expect it will cost you $400USD per candidate to administer your take-home project.
Now that you have successfully created the take-home project, now it’s time to actually use it.
Candidate Admin - 10-15 mins per candidate assessed
The first step is to invite the candidates to complete your take-home project. Normally organizations will do this manually, one-by-one, by writing an email to the candidate and sharing the folder via Google Drive, for example.
Expect some back and forth with the candidate, as they will likely have various follow-up questions about the tasks, including seeking clarification over what can be done and what the questions mean, and potentially troubleshooting getting access to the data.
Don’t forget about nudging, reminders and extensions. Some candidates will not finish on time and will need prompts. Life will get in the way for others and they will be emailing you about getting extensions etc., and so you will need to deal with that. This can all become a pain, which is why it’s all automated within Alooba Assess.
Grading - 30-60 mins per candidate assessed
Once the candidate has hopefully completed the project and returned their work to you, now it’s time to actually grade the project. When they share the project with you, hopefully they’ve made it accessible (e.g. they used the same Google Drive as you’d sent to them).
Grading will need to be done manually by, normally, a senior individual contributor who has the skills & knowledge to actually evaluate the project. Depending on the breadth/scope of the project, you may actually need to divide this among a couple of team members. This will inevitably slow things down as there’s more people involved, but it’s only fair that the person you are getting to do the evaluation actually has the skills & knowledge needed.
You should refer to your marking guidelines that you’ve established earlier on when grading. It will be difficult to fairly do the grading, given you will know the identity of the candidate. You will probably find yourself rushing through answers of candidates that have done poorly in the initial questions that you’ve marked. This is a bias that you should do your best to contain. We solved this on Alooba Assess with the ‘candidate cloaking’ (anonymization of the candidates’ details) & automated grading, but you won’t have the luxury of that.
Note, if you have allowed the candidate to use whatever tool they wish, the person doing the evaluation will need to be ready for that with those products installed. E.g. if the candidate sends you a Tableau workbook as their dashboard, you’ll need Tableau Reader to open it. If they have sent you an R Markdown file, you might need to know that you can open that in R Studio, for example.
This does add a layer of complexity, especially if you are not familiar with such tools. It also raises the question of fairness - if you are not an expert in those tools, are you really in the best position to evaluate someone else in them? We solved this in Alooba Assess by realizing hiring is a team sport, and nobody knows everything - hence we leveraged the help of 50+ skills experts to create & peer review our content.
Feedback - 30-60 mins per candidate assessed
The single biggest complaint of candidates in hiring is crap feedback or no feedback at all. Don’t be a ghost!
Once you have completed the marking, now it’s time to make a decision and provide some feedback. Given you’ve asked the candidate to commit 5-10 hours of their time (for free) to this project, your feedback should be commensurate with this. That means you’ll want to provide detailed, actionable feedback that gives the candidate the best chance to improve for next time, even if your decision is a ‘no’.
It may be tempting to focus on the candidates who are a ‘yes’, but in a way it’s the other candidates who you need to spend more time with. They’re presumably the ones who need more feedback, and they’re the ones who you aren’t going to hire anyway so have nothing to show for their efforts. Your hiring process is your best opportunity to market your business, and your biggest risk to highlight all the shortcomings of the way you operate.
You may want to provide the feedback in writing, or in a live video call. Both have their pros and cons.
The written feedback can be referred to later, and so the candidate can hopefully use it concretely to improve themselves. It can also be delivered tactfully in a careful way, which might not always be possible live in the moment. If you’re delivering feedback on the project and rejecting the candidate from the next stage, they probably are not going to be in the right mind to absorb the feedback, as they will be demoralized from the ‘no’ decision in a live call. On the other hand, live calls do allow more interactive conversation, allowing the candidate to ask follow up questions and have a bit of back and forth.
Providing meaningful feedback is time consuming (expensive), difficult and also a fine-art. Most people aren’t good at delivering feedback, nor accepting it. Rejecting candidates is one of the least enjoyable parts of the hiring process, so it’s natural that you will feel reluctant to do this and it’s likely it will find its way to the bottom of your to-do list each day. This is a normal reaction, but again, you need to avoid it. All feedback on Alooba Assess is automatically delivered, by the way.
This is also a great opportunity to receive feedback from the candidate about their views on the project and the whole experience. Did they enjoy it? Were the requirements clear to them? Did they think it was fair? How long did they actually spend on it? You might like to put these into some kind of survey form to send to the candidate after the fact. At the end of every test on Alooba Assess, candidates are asked for detailed feedback on their experience, which helps us to iteratively improve the product.
Maintenance - $1250USD per project per year
On average, you should expect it will cost you $1250USD per year to maintain 1 take-home project.
You will need to make some adjustments to your take-home project through time. Your role requirements might change or the problems that you’re asking candidates to do might not be relevant in your organization any more. You’ll also get feedback through the hiring process of things that aren’t working with your take-home project. For example, you might find candidates that start the role and have a skill gap that you’d missed. You might want to add a question in your take-home project to cover that off then. Your candidates might give you direct feedback too, perhaps that the project was confusing, or too time consuming to complete.
Because take-home projects are often done in silos in businesses, you also run the risk of the project pretty much evaporating if the person who runs it leaves the business. You’ll want to avoid having that capital just walk out the door, especially given the costs to establish a good take-home project in the first place is so high.
Using Alooba Assess instead of a DIY take-home project provides a much higher quality experience.
Let’s do a deep dive on the quality differences between using Alooba Assess and a DIY take-home project.
While it should be clear the cost saving of using Alooba Assess vs your own DIY take home project, the other biggest improvement you’ll see is quality. This should be unsurprising as Alooba Assess is a carefully crafted product, developed over several years and thousands of pieces of feedback from customers and candidates.
We’ve included the top 5 most relevant differences in quality between a DIY take-home project and Alooba Assess.
The first thing that Alooba Assess gives you over a DIY take-home project is consistency. With Alooba Assess, the main premise is that you create an assessment and expose all your candidates to the same assessment. You ask them the same questions, in the same order, expressed in the same way, they have the same amount of time to complete it and everything is automatically graded. They’ll have an identical candidate experience, that is fully managed, controlled and monitored, and receive feedback in the same way at the end of the assessment.
This creates a level playing field, giving you an apples-for-apples comparison among your candidates. With a DIY take-home project, this is very hard to achieve because candidates will spend vastly different amounts of time on the challenge, and it’s manually graded by some people in your team. With this manual ad hoc approach, that’s done in silos outside of an actual product, it’s very hard to ensure quality control.
The consistency of Alooba Assess also allows for high quality data to be collected within the product, giving you insights into your hiring process, such as ‘Where in your hiring funnel are candidates dropping off?’ and ‘What are the common skill gaps in your applicants?’. With take-home projects, you don’t have these data and you’re really left in the dark in terms of how to then iteratively improve it.
Breadth, depth & quality of question content
An internally created DIY take-home project will normally include a dataset and a small set of questions. By its nature, this is going to cover a pretty narrow set of skills and topics. You’re also limited by your own knowledge or your team’s knowledge. However, hiring is a team sport and often you’re looking to hire someone with skills that extend your team’s capabilities. How are you going to evaluate them without having the skills yourself?
With Alooba Assess, you can assess your candidates across more than 40 skills in analytics and data science, choosing from our 3k+ questions, all hand-crafted and peer-reviewed by skills experts. This means you don't have to be an expert - or have any knowledge, really - in the skills you need your candidate to have. Our coverage extends all the way from basic data literacy through to advanced machine learning, giving you the ability to assess right across the analytics spectrum.
Candidate feedback & experience
Candidates’ single biggest pain point in hiring really comes down to poor feedback delivered slowly, or worse, no feedback at all. Don’t be a ghost. Please.
The main barrier companies report in delivering feedback is just the effort and time involved. There’s often a good intention, but we all know the road to Hell is paved with good intentions.
With DIY take-home projects, someone in your team needs to grade the project, then compile the feedback and actually deliver it. This is a lot of effort, and we’ve found it’s much less likely to be done if the candidate has failed or done poorly on the take-home project. Unfortunately, this is when it’s most needed.
With Alooba Assess, you can guarantee that every candidate receives feedback as soon as their assessment is complete. The feedback follows a consistent structure and is objective as the questions have been automatically graded. Candidates also receive customized learning recommendations depending on which questions they’ve answered correctly, giving them clearly actionable next steps on how to improve their chances of success next time around. Crucially, all of this feedback is delivered automatically, without you needing to do anything.
The DIY take-home project also makes for a clunky, unprofessional candidate experience. Emails with shared Google Drives and scattered docs hardly say 'We know what we're doing', and it's frustrating for candidates when simple things don't work.
With Alooba Assess, the whole candidate experience is well-trodden, monitored and maintained. We actively manage the candidate experience for you, and provide live chat and email support.
A DIY take-home project opens up several channels for unfairness that you should be aware of.
Firstly, there are several layers of bias built into a DIY take-home project. These projects are normally created by one person in a team, and humans have a well-known bias: we believe that the things we know well are more important than the things we don't. This means the coverage of your project will be confined to the skills your team already has, and it might not give you a good picture of the candidate's true skills.
With Alooba Assess, you can leverage the knowledge of 50+ analytics skills experts who’ve collectively figured out the most relevant skills & topics based on their real world industry experience. This gives you a much broader perspective than just 1 or 2 individuals in your team.
Second, a DIY take-home project is administered with full knowledge of who the candidate is. This poses a huge risk of bias and discrimination, because the person grading or administering the assessment has their own set of personal biases that can easily spill, consciously or subconsciously, into the grading process.
With Alooba Assess, candidate cloaking is the norm and the questions are automatically graded. All candidates’ performance can be evaluated completely anonymously. This is a must-have for any organization that’s serious about reducing bias in the hiring process. You really need this fairness-by-design functionality to force fairness into the system, rather than expecting people to (magically) remove their hardwired biases.
Finally, the total effort required for a take-home project for the candidate is normally very large. Candidates will typically spend 5-15 hours (unpaid) completing the take-home project. With most take-home projects being untimed, this biases the performance towards those candidates that simply have more time to dedicate to it.
With Alooba Assess, every assessment is timed and typically much shorter (30-90 minutes). This demands a far smaller commitment of effort from the candidate, and the results provide a fair apples-to-apples comparison of performance.
With all the data transparently collated inside the product, it’s easy to audit your hiring process to ensure it’s being done fairly and objectively.
With a DIY take-home assessment, it's very hard to control who actually completes the project. You email it to candidates and, from there, who knows exactly what happens. It's an untimed, uncontrolled environment that can easily be exploited.
With Alooba Assess, the assessments are hosted by us, and backed by various types of cheating prevention, including the option to require candidates to record snapshots of themselves throughout the assessment. Ask your Alooba account manager about our other advanced cheating prevention mechanisms.
Depending on your organization and your take-home project, using Alooba Assess instead of a DIY take-home project might be faster or slower.
Let’s do a deep dive on the speed differences between using Alooba Assess and a DIY take-home project. Time is money, as they say, and it’s best to complete your hiring as soon as possible, to reduce your time-to-hire, and get your new employee into your organization as soon as possible. This reduces your opportunity cost of not filling the role.
As we explained above, there are a few scenarios where it makes sense to use a DIY take-home project instead of Alooba Assess. Two of these factors are:

1. You work in a large organization with a lot of bureaucratic red tape when it comes to using software products. For example, you might need to go through procurement, data privacy and legal review just to use an external product. This can take anywhere from weeks to months in some organizations.

2. You already have a well thought-out take-home project that works for you. The project has been battle tested, is fully in place, and gives you deep insights into the candidate's skills, answering the main questions you have about them.
In these scenarios, it will actually be faster to use your own DIY take-home project instead of Alooba Assess. So if you are hiring right now and need to evaluate candidates immediately, we'd recommend sticking with your own take-home project for this hire at least, to avoid bogging down your hiring process. If you're hiring further roles and trying to scale out your team, exploring a more scalable solution like Alooba Assess would still make sense for your future roles.
For less bureaucratic organizations, same-day onboarding to Alooba Assess is typical. Following a quick video call & product demo, we’ll get you started by compiling an assessment on your behalf.
If you don't already have your own take-home project set up, it is likely to take much longer to establish one than to use the off-the-shelf Alooba Assess product. Establishing a take-home project takes considerable effort, as we explained above, and typically plays out over a couple of weeks, with the assessment needing to be tested internally before any real candidates are subjected to it.
By the way, one of the reasons a DIY take-home project slows down the hiring process is that the team doing the grading probably doesn't even want to do it. They're more motivated by, and interested in, doing data science work than marking the same project again and again. It can be a real pain in the neck.
With Alooba Assess, you create an assessment in 2 clicks and integrate it into your Applicant Tracking System for candidates to be automatically invited.
Because of the huge time cost of manually administering take-home projects, it's not really feasible to run them on anything other than a small volume of candidates towards the end of the hiring process. The sheer amount of manual effort involved makes them fundamentally unscalable.
This is actually the single biggest benefit of Alooba Assess. You can use an objective, skills-based test at the start of your hiring process, on all your candidates. Typical CV screening rejects 95-99% of candidates right off the bat. Why? Because they had an annoying font, a spelling error, or a CV that was 'too long'? These kinds of arbitrary rules of thumb have led to countless great candidates being rejected without even getting a foot in the door. You can unlock these candidates by measuring skills from the very start instead.
Instead of assessing only 3-5 out of 100 candidates, you can assess 100 out of 100 candidates - a 20x-33x increase in the number of candidates you've actually evaluated. This gives you a dramatically better chance of identifying your best candidate.
As you have seen, establishing your own take-home project is rarely going to be your best option, given the high cost, slow turnaround time, small volume of candidates and relatively low-quality final outcome. It's a huge amount of work, and getting it right is very hard. Not convinced? Feel free to explore our functionality here.
Ready to rock and roll with Alooba Assess? Get started here.
The first step is admitting you have a problem, and so not every organization is ready for objective hiring. Alooba Assess is used by modern, forward-thinking organizations that recognize the need for change. They understand the biases in manual CV screening, traditional unstructured interviews and ‘cultural fit’ sessions, and have made a conscious decision to make their hiring process data-informed and objective.
If you’re not ready and committed to creating an objective and fair hiring process, we are probably not the right partner for you.
We also laid out a clear use case above of when to stick with your DIY take-home project rather than move on to Alooba Assess.
How can you accurately assess somebody's technical skills, like, the same way across the board, right? We had devised a Tableau-based assessment. So it wasn't like a pass/fail. It was kind of like, hey, what do they send us? Did they understand the data? Are the values that they're showing accurate? Where we'd say, hey, here's the credentials to access the data set. And it just wasn't really a scalable way to assess technical skills - just administering it, all of it was manual, but the whole process sucked!
Cole Brickley, Avicado (Director Data Science & Business Intelligence)