We know how valuable free-response questions are. They give candidates a chance to shine by demonstrating the depth of their knowledge in their own unique answers. While Alooba already offers a good set of predefined Free Response questions, opening this feature up lets you add your own questions and ask candidates for an in-depth response to something strictly relevant to you.
We have added the capability for our clients to filter the candidates displayed in the assessment results by a number of criteria, including status, total score, the score of any individual skill, submission date, and non-anonymous candidate information. We have also added an "incomplete" status to the status filter to flag candidates who started any part of the assessment but haven't yet submitted it.
We have renamed the "Customised Multiple Choice Quiz" (MCQ) test to "Concepts & Knowledge" on the create assessment page, as we feel this better describes the purpose of this test.
We have now split our platform into our 3 core products, so that our customers, especially those who use multiple products, can easily navigate through their assessments.
1. Alooba Assess is our data and analytics product for assessing job candidates.
2. Alooba Junior is Alooba's solution for clients who need to screen a large number of candidates at once. It is best suited for graduate or internship programmes.
3. Alooba Growth is our dedicated data literacy assessment programme for assessing your internal team or organisation and scaling data literacy within your company.
It is now possible to track when candidates close the assessment window, and we have also included a "ping" event to confirm that candidates are still connected even when they aren't interacting with the assessment.
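A liveness check built on such a ping event could look like the sketch below. The interval, threshold, and function names are illustrative assumptions, not Alooba's actual implementation:

```python
from datetime import datetime, timedelta, timezone

# Illustrative values: the client pings every 30s, and a candidate is
# treated as disconnected if no ping arrives for 90s.
PING_INTERVAL = timedelta(seconds=30)
DISCONNECT_AFTER = timedelta(seconds=90)

def is_connected(last_ping_at, now=None):
    """Treat a candidate as connected only if a ping arrived recently."""
    now = now or datetime.now(timezone.utc)
    return now - last_ping_at <= DISCONNECT_AFTER
```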
Previously we weren't able to send SMS messages to US phone numbers due to specific requirements for sending SMSs within the US. With this change, we can now send SMS messages to US numbers for Multi-Factor Authentication (MFA) and candidate assessment reminders.
We have simplified our one-time password (OTP) message to make it shorter and clearer.
We were previously calculating how long candidates spend per question as the difference between when they first saw the question and when they last saw it, but this approach breaks down for candidates who go back to review questions they have already answered. We have changed the implementation to use test event tracking, which measures exactly how long candidates spend on each question. Note: this only affects new records.
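An event-based calculation along these lines can be sketched as follows. This is a minimal illustration, not Alooba's actual implementation; the event shape (chronological `(timestamp, question_id)` pairs, with a final event such as a submission closing the last viewing window) is an assumption:

```python
from datetime import datetime, timedelta

def time_per_question(events):
    """Attribute elapsed time to whichever question was on screen, so
    revisits add to a question's total rather than stretching the
    first-seen/last-seen window.

    `events` is a chronological list of (timestamp, question_id) view
    events (hypothetical shape). Each event's duration runs until the
    next event; the final event has no successor and gets no time.
    """
    totals = {}
    for (ts, qid), (next_ts, _) in zip(events, events[1:]):
        totals[qid] = totals.get(qid, timedelta()) + (next_ts - ts)
    return totals
```

With this approach, a candidate who views question 1, moves on, and later returns to it accumulates both intervals against question 1.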
One clear piece of feedback from our user testing of the assessment page was that many users didn't understand what each of the rows within the "Candidate Sentiment" section really meant. We now show the full original question as a tooltip whenever someone hovers over a row name within the "Candidate Sentiment" visualization.
Based on user testing observations, some users didn't realize that the table was horizontally scrollable, as the horizontal scrollbar is not immediately obvious with the dynamic table's current styling. We have updated the horizontal scrollbar styling and added a vertical border to the fixed total score column.
We made the navigation between questions in the test (and on the assessment settings pages) more user-friendly by replacing the previous/next buttons with a horizontal scroll.
When a user was deleted, their old records weren't being fully removed, so trying to re-add a user with the same email address threw an error. We now properly soft delete users and exclude soft-deleted records when checking the email address while adding users.
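A minimal sketch of this soft-delete pattern, using an in-memory SQLite table; the table and column names are illustrative, not Alooba's actual schema:

```python
import sqlite3
from datetime import datetime, timezone

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE users (
        id INTEGER PRIMARY KEY,
        email TEXT NOT NULL,
        deleted_at TEXT  -- NULL means the user is active
    )
""")

def soft_delete(email):
    # Mark the row as deleted instead of removing it.
    conn.execute(
        "UPDATE users SET deleted_at = ? WHERE email = ? AND deleted_at IS NULL",
        (datetime.now(timezone.utc).isoformat(), email),
    )

def email_taken(email):
    # Only active (non-deleted) users block re-registration.
    row = conn.execute(
        "SELECT 1 FROM users WHERE email = ? AND deleted_at IS NULL",
        (email,),
    ).fetchone()
    return row is not None

conn.execute("INSERT INTO users (email) VALUES ('a@example.com')")
soft_delete("a@example.com")
assert not email_taken("a@example.com")  # the address can be re-added
```

The key point is the `deleted_at IS NULL` condition in the uniqueness check: historical records survive, but they no longer block a new user with the same address.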
We added a buffer to revoked tokens so that users don't get logged out whenever the refresh token response fails to be processed.
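The idea can be sketched as follows; the buffer length, names, and data structures are illustrative assumptions rather than the actual implementation:

```python
from datetime import datetime, timedelta, timezone

REVOCATION_BUFFER = timedelta(seconds=30)  # illustrative grace period

revoked_at = {}  # token -> revocation timestamp

def revoke(token):
    revoked_at[token] = datetime.now(timezone.utc)

def is_valid(token, now=None):
    """A revoked token stays usable for a short buffer window, so a
    client that never received the refresh response can retry with the
    old token instead of being logged out."""
    now = now or datetime.now(timezone.utc)
    when = revoked_at.get(token)
    return when is None or now - when < REVOCATION_BUFFER
```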
We have separated the services used for sending critical messages (such as MFA OTPs) from those used for non-critical messages (such as candidate reminders), so that any issues with non-critical message sending don't affect critical message sending.
Resolved an issue where some candidates were getting an error message when trying to execute their SQL queries.
Some of the assessment results table queries were timing out. We have optimized these queries to prevent similar issues.
We were getting an error after removing the candidate benchmark of an assessment, as the 'passing score' was tied to the benchmark candidate's score. We now set the passing score to the Alooba Benchmark when the benchmark candidate is removed.
When loading the results tab of the assessment page, we were making two separate requests to fetch the top candidates and the candidate results table. The top candidates query turned out to be slower than the candidate table query, so we have changed it to reuse the results of the results table query for the top candidates.
The API endpoints that returned users' information included more information than necessary. We have reduced the data returned by these APIs to just what the user should have access to.
We changed the way we fetch the group names on the dashboard to ensure that we display the group names properly for all users.
Fixed the time per question displayed on the create assessment page for the SQL test.
Whenever a candidate included a comment in their SQL query, the query would execute but was marked as incorrect even when it was actually correct. This has now been fixed so that commented queries are graded correctly.
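One plausible way to handle a bug like this is to strip comments before comparing query results. The sketch below is purely illustrative of that idea, not necessarily how the fix was implemented, and it doesn't handle comment markers inside string literals:

```python
import re

def strip_sql_comments(sql):
    """Remove -- line comments and /* */ block comments so a commented
    query normalizes to the same text as its uncommented equivalent."""
    sql = re.sub(r"/\*.*?\*/", " ", sql, flags=re.DOTALL)  # block comments
    sql = re.sub(r"--[^\n]*", " ", sql)                    # line comments
    return " ".join(sql.split())                           # collapse whitespace
```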