The only technical interview loved by candidates, recruiters, and hiring managers

Find the right engineers and save time with assessments and interviews that are built on the most equitable evaluation practices in the industry.


Quick Bytes

Dive into our latest insights and resources for teams navigating technical hiring. Plus, stay up to date on Byteboard news.

    Industry Trends & Research

    Work is changing, so should technical interviews



    With the abrupt changes to how we live and work, many individuals are unexpectedly finding themselves in a volatile job market. A job search is always hard. However, for engineers in particular, there is a unique barrier that leads to frustration and anxiety — the technical interview.

    In traditional technical interviews, candidates are often asked to solve problems covering theoretical concepts taught in college-level computer science classrooms, e.g., sorting algorithms, search trees, etc. While great engineers know how to use these tools, most never directly implement them in their daily work, and certainly not from memory. Every modern coding language abstracts them away to allow engineers to focus on doing their job — building great products. As a result, preparing for these interviews can require months of studying — months that many candidates do not have, especially now.

    So, why does the industry continue using theoretical algorithms interviews?

    While building Byteboard, we've heard many arguments defending the use of these traditional interviews. They fall into three primary categories:

    • Everyone does it, i.e., "It's the status quo!"
    • It's convenient
    • Candidates should be prepared for theoretical questions

    It's the status quo!

    Theoretical algorithms questions are the standard across the industry. Most engineers experience them as candidates and then in turn use them as interviewers. There are now extensive interview guides on all the algorithms and data structures you should master for technical interviews, as well as a plethora of sites to practice on.

    As a result, the status quo has become one that privileges those with the time, resources, and insider knowledge on how to prepare. It would be one thing if these interviews were otherwise effective, but research shows that traditional technical interview performance is pretty arbitrary and wildly inconsistent [1]. Additionally, the interview itself is so anxiety-inducing that we've talked to several engineers who do everything they can to avoid coding interviews altogether, staying at jobs longer than they otherwise would have.

    We love a good quote, and we love it even more when the words were spoken by an incredible (dare we say bad-ass?) female computer scientist:

    "The most damaging phrase in the language is ‘We've always done it this way." Grace Hopper

    It's convenient

    There is no shortage of example interview questions available online. Many companies adopt a model where each interviewer finds or creates an interview question in this format, which seems to scale.

    But, let's be real — being a great engineer does not always translate to being a great interviewer. And relatedly, the sheer availability of algorithms questions does not make the average question a good one. This is especially true when questions are unvetted or don't have structured rubrics to ensure consistent evaluation. At its best, this lack of quality control results in an inability to properly assess candidates for their skills; at its worst, it enables biased evaluation.

    Candidates should be prepared for theoretical questions

    Algorithms and data structures are important. And when you had to spend three months preparing to get your job, why shouldn't the next candidate? After all, spending time preparing demonstrates character. Right?

    Sounds oddly reminiscent of college hazing practices, wouldn't you say?

    Our task as interviewers is to assess if a candidate can do the job. And it's true — knowledge of algorithms and data structures IS important. But only to the extent that it is valuable in your daily work. The fact that many actively working engineers feel they need to spend over 100 hours studying for interviews indicates that we are going too far [2].

    The reality is that we've historically used this ability to prepare for and do well on coding interviews as a proxy for real engineering skills, but that does not make it a good proxy. In fact, it is one with fundamental inequities. Some candidates don't have time to prepare; others, especially those who are new to the industry, don't realize they need to. Some candidates assume their ability will be enough, while others study at universities that offer courses on "Mastering the Software Engineering Interview." Preparedness only tells us one thing — the candidate knew what was going to be on the test and had time to study.

    Frustration with societal acceptance of the status quo has been the catalyst for nearly every major movement in history. Our frustration with technical interviews is why we started Byteboard. We wanted to create a technical interview that assesses real engineering skills, like writing clean code, trade-off analysis, and written communication. We also wanted to create an experience that limits candidate anxiety and evaluates fairly. An interview that (you guessed it!) does not require engineers to spend months preparing to do well.

    It's time for us to stop saying, ‘We've always done it this way.' Our world, our lives, and our work are changing; so should technical interviews.

    [1] https://blog.interviewing.io/technical-interview-performance-is-kind-of-arbitrary-heres-the-data/

    [2] https://blog.interviewing.io/the-eng-hiring-bar-what-the-hell-is-it/

    Industry Trends & Research

    The playbook for hiring in the age of AI



    When it comes to software engineering, the rise of artificial intelligence will place greater—not lesser—emphasis upon the value of HUMAN intelligence. While tools like GitHub’s Copilot are awesome, they are only as good as the engineer using them. 

    And while many engineering leaders are discussing how AI is changing the nature of the work they do (AI-specific skills and tasks), teams often miss the two prerequisites for building excellent technical teams in the AI era: who those people are, and how they’re consistently hired and onboarded into a company.

    In our work at Byteboard, supporting technical and hiring teams at world-class companies like Lyft, Figma, and Webflow, we have seen three key aspects of an emerging playbook for hiring and building in the AI era that can benefit all technical teams:

    1. How engineers work: tasks
    2. How you hire: processes
    3. Who you hire: roles


    1. How engineers work: adapting tasks to a new reality

    The skills demanded of technical teams are undergoing a fundamental shift. As technical hires spend less time coding and more time scrutinizing AI-assisted code, engineering leaders need to recalibrate their criteria for hiring.

    The acceleration in code generation provided by AI tools necessitates a heavier focus on code review. Engineers spend more time deciphering and refining AI-generated code, making attention-to-detail a critical skill. While coding skills remain crucial, the ability to analyze, review, and enhance code becomes equally vital.

    In the AI era, organizations should prioritize hiring engineers adept at navigating the complexities of AI-assisted workflows. The emphasis shifts towards individuals who can seamlessly integrate AI tools into their processes, ensuring the produced code is not just expedited but also of the highest quality.

    KEY ACTION: Dig deeper on how AI has shifted how engineers work, and what engineering leaders must do about it.

    2. How you hire: disrupting the legacy process for technical hiring

    In the AI era, the process of finding, validating, and hiring technical talent has been disrupted at every stage. From top-of-funnel searching to the technical hiring interview loop, organizations are grappling with the challenges posed by AI-assistance tools and the risk of potential cheating.

    Traditionally, coding challenges were the go-to method for assessing technical ability. However, these tasks align closely with the capabilities of AI tools, leading to a surge in anti-cheating measures. Organizations resort to in-person assessments, blocking internet access or IDE usage, and employing unfamiliar mediums like pen-and-paper or whiteboards. While intended to prevent cheating, these measures often hinder candidates from showcasing their true potential, introducing stress and unfamiliarity while increasing costs for organizations.

    An alternative approach involves injecting complexity into assessments. Real-world problems, rich in contextual nuances, present a more accurate measure of a candidate's capabilities. Questions that challenge AI tools because of their inherent messiness and complexity become a litmus test for genuine engineering proficiency.

    KEY ACTION: Lead a discussion between engineering and talent teams about your organization’s approach to applied AI skills. Arriving at consensus now will save you time and confusion in the long run.

    3. Who you hire: the roles that will build the future

    It's not just about updating job descriptions. Roles within engineering teams are evolving beyond traditional boundaries, and the process for hiring them must keep pace. It's no longer sufficient to focus solely on prompt engineering; technical talent must now possess the ability to seamlessly collaborate with AI tools throughout the entire product-building workflow.

    AI-assistance tools exhibit both strengths and limitations. Their remarkable speed in content generation empowers engineers to accomplish tasks more efficiently than ever. However, the Achilles' heel lies in trustworthiness. Identifying correct answers from those that merely appear correct demands a higher level of expertise, especially in complex problem-solving scenarios.

    In this changing landscape, the role of an engineer shifts. While AI tools expedite the 'how' of software engineering, engineers are liberated to delve into the 'what' and 'why' questions, focusing on product and systems design. The ability to review and rectify AI-generated code becomes a critical skill, emphasizing attention-to-detail over sheer productivity.

    Organizations must adapt their hiring strategies to align with this paradigm shift. Hiring engineers with the right skills becomes paramount – individuals capable of distinguishing flawed code from correct code, analyzing product spaces, and understanding organizational goals. As the pace of engineering work accelerates, the responsibility to consider the broader impact on users and the world becomes an integral aspect of the hiring process.

    KEY ACTION: Engineers using AI tools need to be strong at code review and attention to detail. Add assessments that focus on these important qualities in your 2024 hiring process.

    HOW TO BUILD AND HIRE IN THE AI ERA

    At Byteboard we are helping engineering leaders in this new era by incorporating complexity and ambiguity into our assessments that emulate real work—this includes collaborative use of AI tools. Rather than focusing solely on preventing cheating, the emphasis is on assessing candidates fairly in an environment where competency requirements are rapidly evolving, specifically in terms of producing code with generative AI and then reviewing it for quality and accuracy. That’s why our new applied AI assessment is designed to accurately measure candidates who leverage AI as a resource.

    For a deeper dive on how to put this into practice, here’s an Elevate 2023 talk I gave alongside Akash Jain, Senior Engineering Manager at Byteboard client, Webflow.

    Industry Trends & Research

    Technical hiring in 2024: Build with focus



    Technical leaders are facing an unprecedented confluence of challenges: business and market forces are putting pressure on us to build ever better products with greater velocity. And technological forces (yes, I’m especially talking about AI here) are completely disrupting the way we build those products. 

    What is a technical leader to do? We can’t solve what we don’t understand, so I wanted to help my fellow engineers and technical leaders see the perfect storm of challenges in front of us—and most helpfully, the way that these challenges can also be approached as clear opportunities.


    Note this isn't just my opinion. In my job as founder, CEO, chief salesperson, customer support, and many other roles at Byteboard, I speak with technical leaders at world-class engineering companies like Lyft, Figma, and Webflow.

    Here’s what we see coming, how technical leaders can turn these potential challenges into opportunities, and how we’ve evolved our product to help software teams make the biggest impact in this new reality.

    WHAT’S TOP OF MIND FOR TECHNICAL LEADERS: NOT JUST AI (BUT, YES, ALSO AI)

    1. Fewer resources: doing more with less

    The painful and widespread layoffs of 2023 saw many world-class technology companies cutting deeply, impacting technical teams in unprecedented numbers. For the first time in the careers of many technical and talent leaders, it's a buyer's market for technical talent.

    But this market inversion brings many challenges along with its opportunities. Precisely because the top of the funnel is easier to fill, it has never been harder to sort, interview, and select the right candidates. With legacy interviewing processes, talent and hiring teams spend dozens of hours screening candidates, and the more candidates are screened, the less time engineering teams have for building products that will win in a more competitive market.

    As the engineering market offers more talented candidates, the expectations from company leadership rise accordingly. Engineering leaders need to keep pace, hiring excellent talent, often with fewer resources of their own, which is hard given the increased pressure on their primary focus: building a product that still wins when customer budgets might be shrinking.

    Which leads me to the second factor for technical team success in 2024:

    2. Building with AI: disrupting the nature of technical work

    Teams need to achieve greater impact with fewer resources, and one of the critical ways engineering teams are achieving this is through AI-assistance tools like ChatGPT and GitHub Copilot. A recent Stack Overflow survey of 90,000 developers showed that half of developers are already using GitHub Copilot, and over 80% are using ChatGPT. While these tools allow engineers to produce code and content at unprecedented speed, this efficiency comes with a caveat: the challenge of ensuring the trustworthiness of AI-generated work.

    AI-assistance tools, in their current state, serve as a speed hack for engineers, enabling them to jumpstart projects and codebases. Yet, the onus lies on the engineer to possess the expertise to identify and address the often subtle flaws within AI-generated outputs. 

    The integration of AI-assistance tools heralds a transformation in the way engineers approach their work, and in 2024 engineering teams will have to incorporate AI deeply—but thoughtfully. Engineering leaders must solve for three key disruptions to the way their teams do work.

    First, engineers can accomplish certain aspects of their work more rapidly, allowing them to focus on broader product and systems design. The AI tools act as accelerators for answering the 'how' questions, enabling engineers to dedicate more time to the critical 'what' and 'why' aspects of software engineering.

    Second, the balance between code generation and code review is shifting. Engineers, now armed with AI-assistance, spend more time scrutinizing and rectifying AI-generated code rather than creating it from scratch. This shift underscores the increasing importance of attention-to-detail, emphasizing the need for engineers to understand and navigate subtle intricacies hidden within lines and lines of code.

    Third, organizations must be meticulous in hiring engineers with the right skills. The ability to discern between flawed and correct code is becoming a pivotal skill, coupled with the necessity for engineers who can contribute effectively to product and systems design. Moreover, teams are hiring for more senior levels than in the past because, when AI-enabled, those hires will be even more productive, faster.

    That leads us to our third factor for technical team success in 2024:

    3. Hiring for AI skills: finding talent and building teams for emerging skillsets

    With the advent of AI-assistance tools, the traditional methods of assessing technical skills in software engineering are undergoing a radical transformation. Coding challenges, once the cornerstone of technical assessments, now face challenges themselves in evaluating a candidate's abilities.

    The rise of AI introduces a new variable in the hiring equation. Questions designed for strict prompt-based coding challenges align closely with the capabilities of AI tools like ChatGPT. Organizations relying on such assessments must adopt stringent anti-cheating measures, often at the expense of the candidate's ability to showcase their true talent.

    Alternatively, the path forward involves injecting complexity into assessments. Real-world problems, laden with contextual nuances, become a formidable challenge for AI tools. This complexity mirrors the intricacies engineers encounter in their day-to-day work, making it a more accurate measure of a candidate's capability.


    THE BYTEBOARD PLATFORM: Building a new technical hiring process for a new era of building software

    In 2024 technical leaders need to refocus their limited time on building out their roadmap vs building out a team, and talent leaders need to reimagine hiring processes to be more time-efficient with even better outcomes. Sadly, some teams are now half or even a quarter of the size of what they were at the start of 2023, so every engineer—and engineering hour—counts more than ever before.


    For technical teams, the theme of 2024 is building with focus. For us this means giving engineering teams fewer distractions and higher-quality results they can trust, without a lot of the hoopla.

    That's why we've made the biggest-ever expansion to our product: Byteboard is now an all-in-one technical hiring platform. From staff-level hiring assessments to new assessment offerings, all the way to a comprehensive live coding solution for on-site interviews, we give technical leaders hundreds of hours back to build software, while reducing the risk of mis-hires and optimizing for candidates with real-world skills for the current era. And talent leaders can now reinvent their hiring process to be faster and more effective while managing fewer tools, processes, and back-and-forths.

    This expanded platform has already had an outsized impact on our early customers. Byteboard customer Webflow now reports that using Byteboard saves them over 120 hours per month—regaining the equivalent of nearly a full-time engineer. Byteboard helps you build great software. Both by hiring the right technical talent, faster, and by saving your engineering org hundreds of hours every quarter. Now, as an end-to-end platform, we’re able to support engineering orgs even better. Not only do we cover more languages and roles than ever before, we help you manage an efficient hiring process all the way through to on-site interviews that candidates love. In 2024 each step of the hiring loop will be impacted by AI skills and AI-assisted tools, and we’re excited to offer a technical hiring platform that is purpose-built for this new era of technical hiring.

    Interview Quality

    There's more to assessing code quality than automated correctness testing



    We know that hiring engineers can be a challenge. One of the key skills that technical interviews are meant to assess is an engineer's ability to write high-quality code. But when technical assessments only measure one dimension of code quality, engineering recruiters and managers miss out on valuable insights into how an engineer would actually perform in a real coding environment.

    The traditional way to evaluate a candidate’s programming skill is to give them a technical problem to solve, and then measure their performance through automated correctness testing, where the candidate’s solution to the problem is checked against a pre-written set of test cases that assess how correctly the code solves the stated problem.
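
    For illustration, here is a minimal sketch of what that kind of automated grading could look like (the function names and test cases below are hypothetical, not drawn from any particular platform):

        # Minimal sketch of automated correctness testing (all names hypothetical).
        # The candidate submits a function; the grader runs it against
        # pre-written test cases.

        def candidate_solution(nums):
            """Candidate's submission: return the input list sorted ascending."""
            return sorted(nums)

        # Pre-written test cases: (input, expected output) pairs.
        TEST_CASES = [
            ([3, 1, 2], [1, 2, 3]),
            ([], []),
            ([2, 2, 1], [1, 2, 2]),
        ]

        def grade(solution):
            """Return the fraction of test cases the submission passes."""
            passed = sum(1 for given, expected in TEST_CASES
                         if solution(given) == expected)
            return passed / len(TEST_CASES)

        print(f"Score: {grade(candidate_solution):.0%}")  # prints "Score: 100%"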

    While this approach is easily scalable and allows for numerical comparisons between different candidates, there are several ways this strategy fails to evaluate critical engineering skills. 

    Limitation #1. Real-world engineering problems aren’t clear-cut

    Automated correctness testing places constraints on the types of questions that can be asked in interviews. When the interview needs to be evaluated by pre-written tests, the questions asked have to be well-defined problems with precise descriptions for what correct input and output should look like.

    But real-world engineering problems are often more complex and open-ended. They require the engineer to define ambiguous ideas precisely, to determine what the correct behavior should be, and to question assumptions about whether the problem they're solving is even the right one to achieve the project's goals.

    Limitation #2. The computer isn’t the only audience that reads code

    Writing code, like many other forms of communication, is a task of communicating to multiple audiences. The computer is one audience, and it's important that the code the computer operates on matches the engineer's intent.

    But the second audience, no less important than the first, is people: other engineers who might read the code, or even the same engineer who might look back on their own code in the future. For this audience, it matters how clearly the code’s intent is expressed, how easy the code is to test and maintain, and how well the code is abstracted and logically organized. Writing high-quality code means communicating with people as much as it means communicating with machines, and an effective assessment of engineering skill should take that into consideration.

    A correctness-only measure of code captures only how well the engineer has communicated their intentions to the computer, and misses out entirely on how well the engineer communicates their ideas and intentions to other engineers.

    How we evaluate code quality at Byteboard 

    It’s for this reason that at Byteboard, we have human graders look critically at each candidate’s code and evaluate it along multiple dimensions of code quality. We consider how well the candidate follows the conventions and idioms of their preferred programming language. We consider how well-documented the candidate’s code is. And we consider the decisions the candidate makes about code clarity, from as small as how to name a variable to larger-scale decisions like how to structure code for a complex task.
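
    As a toy illustration of the clarity dimension (a hypothetical example, not taken from an actual Byteboard assessment), compare two functionally identical snippets. Automated tests would score them identically; only a human grader can reward the second for communicating its intent:

        # Version 1: passes every test, but is opaque to human readers.
        def f(x, y):
            return [i for i in x if i[1] > y]

        # Version 2: the same logic, with naming and documentation that
        # express intent to the next engineer who reads it.
        def filter_orders_above_total(orders, minimum_total):
            """Return the (order_id, total) pairs whose total exceeds minimum_total."""
            return [order for order in orders if order[1] > minimum_total]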

    Automated correctness tests can only say whether or not a candidate's code produces the expected result. In cases where the code does not produce the expected result, automated testing often struggles to explain why the code deviated from expectation.

    Maybe it’s because the candidate’s logical approach was entirely wrong. Maybe their approach was right, but they struggled to express their ideas in code. Or maybe a small syntax error prevented otherwise correct code from running, or they missed a key implementation detail, or there was an edge case they didn’t consider.

    Each of those cases suggests something different about the candidate and their strengths and weaknesses. But automated testing treats all of those cases the same: not passing. Byteboard's human grading lets us take a strengths-based approach and consider what specific strengths the candidate demonstrated even if the code didn't produce the expected result.
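
    To make that concrete, here is a sketch (the submissions are invented for illustration) of three very different mistakes collapsing into one automated verdict:

        # Three hypothetical submissions to "sort a list ascending". Each fails
        # for a different reason, but automated testing reports them identically.

        def wrong_approach(nums):
            # The logical approach is entirely wrong: reverses rather than sorts.
            return list(reversed(nums))

        def edge_case_miss(nums):
            # A sound insertion sort that crashes on the empty-list edge case.
            result = [nums[0]]  # IndexError when nums == []
            for n in nums[1:]:
                for i, r in enumerate(result):
                    if n < r:
                        result.insert(i, n)
                        break
                else:
                    result.append(n)
            return result

        def detail_miss(nums):
            # Right idea, but one missed detail: sorts descending, not ascending.
            return sorted(nums, reverse=True)

        for submission in (wrong_approach, edge_case_miss, detail_miss):
            try:
                ok = submission([3, 1, 2]) == [1, 2, 3] and submission([]) == []
            except Exception:
                ok = False
            print(submission.__name__, "PASS" if ok else "FAIL")  # all print FAIL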

    These factors all contribute to Byteboard’s understanding of an engineer’s code quality: an understanding that’s broader than we would be able to determine via unit testing alone. Strong software engineers are expected to be able to communicate and collaborate effectively on code, and a multifaceted approach to thinking about code quality helps us to identify candidates that are highly capable in those skills.

    Industry Trends & Research

    Tips for unlocking the full power of your technical interview platform



    It's a recurring misconception: buy a new piece of software and all pain points will be solved. In reality, delivering breakthroughs in any organization requires a successful implementation process, which includes designing training and building trust through ongoing support. Byteboard has several features that can be leveraged to maximize hiring success. To reap the full benefits of Byteboard, we recommend these 3 tips for a successful implementation.

    3 Best Practices for Adoption and Implementation of a Technical Interview Platform 

    1. Train both People teams and non-People teams on new software

    It can be natural to focus your training and implementation on a single set of users, but in order to build trust amongst stakeholders, it is vital to cast a wide net. When mapping your internal change management and training plan, ask these questions to help you determine who needs training on the new platform in your organization.

    • Who will benefit from the implementation of the platform?
    • What jobs are being replaced by the platform?
    • What processes or activities will users need to perform on the platform?
    • How will different users experience the platform based on their level of technical knowledge? 

    Your change management and training plan will depend on the context of your organization. Often providers will share a detailed roadmap for onboarding and support from their Customer Success teams; at Byteboard, we facilitate training sessions specific to each subset of users, including engineering hiring managers, recruiters, and recruiting coordinators.

    2. Integrate your tech stack 

    We use lots of tools in our daily work lives, and it can be overwhelming to context switch between platforms, disjointed workflows, and an overload of messages.

    Integrating your technical interview platform with your Applicant Tracking System (ATS), such as Greenhouse or Lever, your messaging tools, and single sign-on (SSO) services can (and will) remove friction in adoption. The last thing you want is to preclude a team from using a solution because they can't remember their login credentials.

    The easier it is for data and users to flow between systems (reducing manual tasks), the more likely you are to see adoption and use of a new solution. We recommend spending time as part of your implementation plan to set up all value-add integrations. Integrations can help ensure consistent and connected data across your stack that will, in turn, allow your team to unlock insights on candidate performance and qualities.

    3. Re-align your interview process 

    Technical interviews don't exist in a vacuum. You may find that parts of your interview process become duplicative once a hiring platform has been adopted. It is important to audit what experiences and skills are assessed in each part of your interview process to ensure you are receiving signals on all required skills while avoiding duplicative interview steps (and duplicative insights from those steps).

    A strong technical interview platform should offer your team fair, consistent, and predictive assessments aligned to relevant skills in your hiring. We recommend monitoring a platform’s skills library and ratings functionality for relevance as part of both implementation and continued usage. 

    Ratings provide recruiters and hiring managers with a snapshot of a candidate's performance on an interview and overall skill set. While ratings can be incredibly insightful, they are only a top-level indicator of the quality of a candidate's technical interview performance.

    Software Engineering roles continue to evolve as new programming languages and frameworks become relevant to specialized technical roles. Without ongoing updates and adjustments to assessments and ratings methodology, you may find a mismatch between assessments and skills. 

    At Byteboard, we work with hiring teams whenever a new role opens to help guide you to the best suited assessment. We also custom calibrate ratings to your team’s expectations for a specific role and level.

    Interested in learning more about what makes teams who use Byteboard successful, starting at implementation? Schedule a demo to learn more about how we can help deliver results in your technical hiring.

    Interview Quality

    How we make sure our technical interviews are actually fair (and how you can too)



    When it comes to interviewing and hiring tech talent, most modern recruiting teams are seeking to balance two goals: assessing candidates rigorously and effectively, and hiring a diverse team – in race, gender, and background. Many interview tools and candidate assessment platforms measure the first goal with a well-known metric: the onsite-to-offer ratio. But you might want to know – how are technical interview platforms measuring fairness in their technical interview processes?

    At Byteboard we've been tracking our diversity numbers carefully, using (among other tools) the gold standard of hiring fairness metrics: the Adverse Impact Ratio, also sometimes referred to as the Four-Fifths Rule. The Adverse Impact Ratio is an easy calculation: take the pass rate of an underrepresented group and divide it by the pass rate of the majority group.

    For example, if 20% of women are getting passing scores on your technical interview, and 40% of men are getting passing scores, the adverse impact ratio for women is 20%/40%, or 50%.

    The reason it’s often referred to as the Four-Fifths Rule is because the long-time standard for fairness in candidate selection – popularized and upheld by the U.S. Equal Employment Opportunity Commission (EEOC) – is an 80% (or four-fifths) adverse impact ratio across all protected groups. If you can achieve an 80% or higher ratio across all demographics, you can rest assured that your hiring process isn’t meaningfully biased against any of those groups.
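
    In code, the calculation takes only a few lines. Here is a minimal sketch with made-up pass rates, using the convention that the highest-passing group serves as the reference:

        # Adverse impact ratio (four-fifths rule) with illustrative, made-up
        # pass rates. The reference group is the one with the highest pass rate.

        pass_rates = {
            "men": 0.40,    # 40% of men pass the technical screen
            "women": 0.20,  # 20% of women pass
        }

        reference_rate = max(pass_rates.values())

        for group, rate in pass_rates.items():
            ratio = rate / reference_rate
            status = "meets" if ratio >= 0.8 else "fails"
            print(f"{group}: ratio = {ratio:.0%}, {status} the four-fifths rule")
        # women: ratio = 50%, fails the four-fifths rule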

    Unfortunately it’s an open secret in the tech industry that tech hiring’s diversity numbers are often dismal. In fact, it’s gotten so bad that some people even believe that the goals of rigor and diversity are in conflict – that in order to hire a diverse team, you have to “lower the bar” for both non-technical and technical interviews, so to speak. This is disappointing, because in our experience helping teams assess tech talent, we at Byteboard have found that the exact opposite is true: as we’ve focused our efforts on constantly improving the quality and rigor of our technical assessments to increase our clients’ onsite-to-offer ratios, we’ve also watched the fairness of our assessments climb.

    So far in 2023 Byteboard’s software engineering interviews have an adverse impact ratio above 80% for all races and genders. That means that teams who use Byteboard in their hiring process can be confident that they are helping to eliminate bias and increase fairness in the industry, while simultaneously increasing their own onsite-to-offer ratios.

    If you want a fair and equitable hiring process, the first step is to make sure you’re measuring fairness effectively. If you’re using external screening tools and interview providers, ask them to provide their adverse impact numbers. And if you’re keeping your candidate assessment in-house, it’s an easy calculation to do on your own. But whatever you do, don’t sweep fairness under the rug, and don’t assume that you have to sacrifice fairness for efficacy, or vice versa.

    Learn More 

    With over 20,000 candidates evaluated, Byteboard helps companies evaluate high-quality talent through our redesigned technical interview, proven to reduce mis-hires and improve candidate satisfaction.

    Schedule a demo to learn more about how we can help deliver results in your technical hiring.

    Interview Quality

    Three reasons why consistent context interviews make for better hiring



    While inconsistencies and surprise twists can make for a great Christopher Nolan film, talent teams and recruiters should cultivate an interview process with consistent context in order to better predict a candidate's on-the-job performance.

    Just think about the last interview plan you put together:

    • How many interviewers were part of the hiring panel? 
    • What type of technical interview questions or prompts were candidates required to answer?
    • What was the context of each technical interview? 

    A real work environment is built on consistent context, which is why a good interview process should reflect the same.

    For both behavioral and technical interviews, a consistent question or prompt context facilitates a candidate's ability to think about and build on the same general problem space. Instead of one interviewer asking about the number of windows in Manhattan, another asking why potholes are round, and another asking {insert random-and-disconnected-rote-style question here}, you can build familiarity with a candidate in a relevant context across multiple conversations.

    Why does technical interview consistency matter, and how do you build a shared context into an interview process? Check out three reasons why prioritizing consistent context in interviews makes for better hiring. 

    Better Skills Evaluation

    In the vast majority of our daily work, we are building from an existing knowledge base and context rather than approaching a problem from scratch. While interviews are consolidated versions of work, asking about an entirely different problem space in each 30 or 60 minute conversation takes a real working environment to an unnecessary extreme. It’s like going from Barbie to Oppenheimer without time for an outfit change.

    Interviews with consistent context reflect the way that work happens. A candidate is given a question in a specific context, and then proceeds to build on their thinking in that same context with additional, related conversations with other individuals. Putting the candidate in a realistic environment gives you the best chance to evaluate their performance accurately and completely.

    Happier Candidates, Higher Offer Accept Rates

    Unfamiliar, start-from-scratch interviews haunt candidate nightmares. You may have even had a nightmare like this yourself: showing up to a test for a class you’ve never taken, hearing an interview question that you completely blank out on, and so forth. Presenting a candidate with different scenarios in each conversation is more likely to turn them off of your company than give you an accurate read on their skills. One recent Byteboard candidate summed this up well:

    "In a typical interview, I get really nervous because I have no idea what to expect. That's extremely intimidating and rarely provides a great read on what to expect from a candidate on a daily basis."

    We've heard from thousands of candidates who value the opportunity to dive deeper into a topic area in the Byteboard Interview. With the interview experience often serving as a candidate's primary exposure to a company pre-offer, consistent context interviews offer you the best chance to not only find, but ultimately hire, the best candidate for your role.

    Dive Deep While Maintaining Trust

    The internet is rife with stories of ideas “stolen” during job interviews. No matter whether these situations are genuine larceny, innocent coincidences, or a mix of both, candidates are understandably hesitant to dive into the specifics of their ideas or plans for a role they are interviewing for prior to an offer. Since getting an accurate picture of a candidate’s skills requires a deep dive, this creates a challenge.

    Consistent context interviews offer a solution: select a context that is parallel to the problem area that the role focuses on without being an exact match. Candidates can share detailed ideas, thoughts, and reactions over the course of multiple interviews. You see more of the candidate’s thought process without fear that what they are sharing might later show up in a company press release.

    How do I build consistent context into my interview processes?

    Adapting an interview process to a consistent context in-house can be challenging. It often involves writing new interview questions, training interviewers, and even developing take-home challenges.

    Byteboard is an end-to-end technical interview platform that features a library of interviews that build on consistent context. Candidates familiarize themselves with the question context in an independent interview, including both a technical reasoning exercise and a code implementation section. You can pick up from that interview live, allowing candidates to elaborate on their work while you learn more about their fit for the role.

    Bottom line? Your interviews should complement and build on each other. 

    Byteboard works with companies to ensure they are hiring the right candidates for their roles — both through interview consistency and measurement of the right skills. Interested in improving your technical hiring?

    Reach out to connect with an expert!

    Industry Trends & Research

    Byteboard’s highest-scoring technical skills in 2022 university recruiting



    What do Fortune's 10 best companies to work for all have in common? They all have early talent programs. For many of these companies, interns serve as the primary pipeline for new college and career hiring. Zippia reports that 70% of interns are hired by the company with which they intern, making an intern's on-the-job performance a vital predictor of how successful they will be in a full-time capacity.

    With more Gen Z teens considering summer employment than Millennials did, companies are investing more robustly in intern hiring and development. What does this mean for companies winding down summer 2023 internships and preparing for future intern hiring cycles? Well, the first step in ensuring you are hiring effectively is determining what skills interns need in order to be successful on your team.

    For university recruiters and program managers, it is important to work with your engineering counterparts to identify what skills are a must-have priority for hiring and what skills can be learned on-the-job as part of internships.

    Identifying Must-Have Skills

    What do you get when you have over a dozen check-box criteria an intern needs to meet before getting hired? A frustrated hiring manager and recruiting team. Interns are early in their careers, which means they won't always come with years of experience and all of the skills you might want.

    As university recruitment practitioners, it's important to work with hiring managers to develop a shared understanding of what's required for an intern to be successful. We recommend asking questions such as the following as part of your new hiring manager intake process:

    • What skills does an intern need to know coming into the internship program? 
    • What skills can an intern learn as part of our internship program? 

    For skills that are identified as required, it’s important to map out where in your hiring process you will receive a signal on the skill. You might map a skill to any of the following hiring process stages: application, resume review, technical screen, on-site interviews. Let’s say that you identify the ability to read code and written communication as required skills. Here’s how you might map these skills to your hiring process:

    • Application - Written communication
    • Resume Review - Interest in technology, Language specific experience
    • Technical Screen - Reading and writing code, Attention to detail
    • On-Site Interview (Panel Interviews) - Computer science fundamentals

    Last year Byteboard partnered with over a half dozen companies to evaluate over a thousand intern candidates. Each company was able to customize and prioritize testing for skills that were essential to the jobs they were hiring for, giving both companies and interns greater insight into whether they would be a good fit for the company.

    Find out what trends we learned about intern skills based on completion of Byteboard interviews and how you can use these learnings to inform future hiring and early talent development programs.

    Trends in Early Career Skills for Tomorrow’s Engineering Teams 

    Byteboard interviews assess intern candidates across over a dozen skills ranging from technical to non-technical; depending on your company's technical focus you may choose to prioritize a certain subset of skills when making hiring decisions. 

    We recommend companies focus most on skills that are not fungible or teachable within the course of a summer internship, as you can supplement the intern summer experience with programs to improve skills in other areas. At Byteboard, we took a look at the highest- and lowest-scoring skills across the 1,000+ intern candidates who completed a Byteboard interview in 2022; here's what we learned:

    • Top three skills: testing, working within complex large systems, translating ideas to code
    • Bottom three skills: thrives in ambiguity, collaboration, resourcefulness 

    Let's break down two of these skills and why they matter.

    1. Translating Ideas to Code: Can an intern turn pseudocode into functioning code without guidance? While a technical intern's role goes beyond writing code (asking questions, knowing when to ask for feedback), writing quality code is a fundamental skill that is critical to ensuring an intern’s ability to contribute to real world work.
    2. Thrives in Ambiguity: Can an intern jump into tasks that are not well defined and navigate with little to no guidance or feedback? It is important as employers to take time to develop well-scoped projects with a clear start and end point. Given the compressed nature of internships, interns may not always have time to investigate all of a project's unknowns and produce a tangible work product.

    Using Interview Feedback to Inform Intern Experience

    University programs teams can use insights on intern class skills to inform programming for internship programs. Here are a few examples of what this might look like in practice:

    • Intern Project or Host Assignments: Why not map interns to projects or hosts aligned to their strongest skills? University programs teams can use insights from interviews on candidate skills to place interns on projects aligned to their skills.
    • Mentorship Programs: You can use insights on intern skills to inform mentorship pairings, matching interns based on their skill development needs to a mentor at your organization who counts that skill as a strength.
    • Personal Development Workshops: Insights into skills gaps can help inform development workshops included as part of your internship programming.

    Byteboard evaluates intern candidates across 20+ software engineering skills as they work through a time-boxed project that simulates real-life work. Our candidate skills reports can help companies identify and understand where an intern or intern class can be better supported throughout the course of an internship. 

    Read more about how our partners Lyft and Figma have been using Byteboard as part of university programs over the last few years. Interested in partnering to develop a university recruiting process that measures for the right skills? Let’s discuss.

    News

    More data, better insights: Byteboard’s rating engine is getting a glow-up



    Byteboard has always done things differently. Instead of recreating the figurative broken wheel that is traditional interviews with some embellishments, we set out to build an interview that resembles the job: the first of its kind, a project-based interview that assesses 20+ domain-relevant skills. More importantly, when it came to evaluation, we knew we didn't want to stack rank candidates against each other or give hiring managers a flat numerical score with no context.

    Byteboard candidates are evaluated against their own performance and their ability to meet the skill requirements set by the company. And today, we're excited to make further progress in giving more data and insights to our companies by launching enhanced ratings and skills reports. Byteboard reports to hiring managers will now include a 4-bucket rating that indicates whether a candidate is "Strong", "Leaning Strong", "Needs Improvement", or "Poor", along with a breakdown of how a candidate performs across core skills and domain-specific skills.

    The more data, context, and nuance we can share about a candidate's skill set, the better informed, and most importantly the more fair, the hiring decision. Our partners at Figma have seen over a 35% increase in offers to underrepresented hires, along with gains in quality of hire.

    Byteboard skills reports will now also include summarized feedback you can share back with candidates, covering actionable areas of improvement as well as suggested areas of focus for any future interview rounds. The objectivity and detail in our ratings and reports help companies build a structured technical interview process that is consistent, efficient, and effective.

    Why the Byteboard interview is getting a glow-up

    The world is changing rapidly around us, especially with tools like ChatGPT and GitHub Copilot becoming more mainstream. Technical interviews are in need of a glow-up. Our new candidate ratings provide a more nuanced understanding of a candidate's performance. Those who receive a Leaning Strong rating on the Byteboard interview have demonstrated most of the core skills required for the role. While they show great potential, there may still be areas where further development is needed. On the other hand, candidates who receive a Needs Improvement recommendation have shown promise in certain aspects but require additional skill development to excel in their role.

    By getting a more granular rating, hiring managers can now further optimize their hiring process by deciding what level of skill development they're able to support post-hire across a variety of hard and soft skills.

    How to get started with the updated ratings

    For our existing customers, the updated candidate ratings are available now, allowing you to immediately benefit from the added clarity and insights provided by the new ratings.

    If you are considering Byteboard for your technical hiring needs, now is the perfect time to experience our platform's comprehensive assessment capabilities. Reach out to our team to schedule a demo today and hear how we've helped companies like Figma, Webflow, and Lyft use Byteboard to hire across all levels and key technical roles while maintaining high candidate satisfaction ratings.

    But don't just take our word for it. Here are some glowing reviews from recent candidates who have taken a Byteboard interview:

    “Really fun exercise! I love how engaging and real world it was. I think a lot of the industry can learn from this experience. Thanks so much for the opportunity!”
    - Recent Byteboard Candidate

    “This was a fun interview to do. Much more refreshing than the current leetcode style of questions are usually given out.”
    - Recent Byteboard Candidate

    Interview Quality

    The secret to a good technical interview process



    If you’re an engineering manager or recruiter, you’ve likely experienced firsthand the shortcomings of a traditional technical interview. We’re talking algorithmic tests that ask candidates to work through array manipulation or match string patterns in artificial environments, where candidates cannot even look up syntax, compile their code, or accurately represent how they would work on the job—the very job they are being interviewed for.

    Engineering managers, recruiters, and even candidates are becoming increasingly dissatisfied with traditional technical interviews and the broken interview process. As a result, many companies are seeking alternatives to update their interview processes with tools like Byteboard that enable companies to more accurately and confidently evaluate their technical candidates through interviews that simulate real work.

    We'll share the impact of a poorly designed interview process on candidates and teams, along with a checklist to help you assess whether your process is meeting the bar.

    The ones that got away

    Here's a heavy dose of truth serum: software engineering as a field has evolved, but the way we interview for those skills has not. In a recent report on the State of Engineering Management in 2023, engineering leaders reported that maintaining high-performing teams was the number one challenge this year. Building high-performing teams starts with hiring the right people. And choosing the right hire comes with a lot of responsibility. Missing out on a candidate who would have made a great addition to your team, simply because your company's technical interview involves theoretical algorithm questions and Leetcode tests, is crushing, to say the least. There is strong talent out there, but it takes a well-designed interview process to find, engage, evaluate, and ultimately close on those candidates.

    There are limits to traditional technical interviews

    Coding challenges and other traditional technical interview methods often overlook or undervalue essential skills that are critical for a candidate to succeed in their role. For example, coding challenges may only test a candidate's ability to write code, but fail to test their ability to collaborate with a team or their communication skills. Having knowledge of algorithms and data structures is undoubtedly important for engineers, but it's only valuable when put to use in their everyday work. Problem-solving skills are important for engineers to have, and traditional technical interviews often fail to test for real-world problem solving.

    Traditional technical interviews often require months of dedicated study as well, which may not be feasible for many candidates, and can benefit those with the privilege of time, resources, and insider knowledge on how to prepare. This can impact underrepresented groups in particular, creating inequity in the process. Plus, interviewers may unconsciously or consciously favor candidates who have similar backgrounds or experiences as themselves, leading to a lack of diversity in the hiring process. In fact, some companies have reported losing as much as 89% of their URM candidates in their first round coding screen 💔.

    What simulating real work in interviews can help you uncover

    Assessments that include tasks like reading through design documents and adding comments, implementing new features in existing codebases, and making decisions in ambiguous situations where there is no one right answer are the best way to truly measure how a candidate will perform on the job, because they simulate the tasks engineers actually do. These types of interviews test a candidate's ability to identify and solve real-world problems, work collaboratively with a team, and communicate their thought processes and solutions effectively.

    Before you start to assess whether your candidates pass your bar, make sure your interviewing process passes the bar of being a well-thought-out and effective process.

    How does your interview process do across the following?

    • Looks for a full picture of a candidate's skills, not just familiarity with advanced data structures and algorithms.
    • Asks candidates to perform tasks that simulate potential engineering challenges they may face on the job. (Don't have candidates do actual work you are shipping to production until they are employees and getting paid for the work.)
    • Provides a structured and rigorous rubric system that is de-identified to eliminate unconscious bias, providing a fair assessment experience.
    • Is designed to recognize the diversity of talent and assess candidates from multiple angles, acknowledging that there are different ways for a candidate to be a strong fit for a role.
    • Intentionally designs each interview to extract unique signals, so you're not doing multiple rounds of coding interviews but instead using each interview to build additional signal.

    You’re (not) on your own, kid

    How does your interview process do on the checklist? If you didn’t hit all checkboxes, you’re not on your own. 

    We have seen and even personally experienced the ineffectiveness, inequity, and inefficiency of the traditional technical interview process. Technical interviews need to simulate real work in order to accurately assess whether a candidate can do the job. By building an equitable interview process that is intentional about the skills you would see on the job, you not only increase quality of hire, but also shorten your time-to-hire, increase candidate acceptance rates, and save time.

    Our focus at Byteboard has been to provide interviews that simulate real work, designed and reviewed by real engineers, that measure more than 20 domain-related skills, enabling recruiters and hiring managers to make faster, smarter, and more equitable hiring decisions. When compared to in-person interviews and automated screeners, Byteboard meets all the requirements outlined above — in addition to being a scalable solution for high-volume hiring. We've helped companies like Figma, Webflow, and Lyft use Byteboard to hire across all levels and key technical roles while maintaining the highest candidate satisfaction ratings for their interview process.

    Experience it for yourself by requesting a demo today.

    Industry Trends & Research

    How to hire for Data Analyst and Security Engineering roles more effectively



    As companies collect and store more data, and security concerns continue to rise, the demand for data analytics and security engineering roles has skyrocketed. Hiring for these positions, which require specialized skill sets that differ from more general software engineering roles, can be challenging. This is why traditional hiring methods often fail to effectively assess candidates' skills and experience for these specialized roles. In this guide, we'll share our best practices and insider tips for hiring data analysts and security engineers, so you can build a strong and successful team.

    Understanding the Unique Hiring Needs for Data Analysts and Security Engineers

    Data analysts and security engineers need specialized technical skills and experience to perform their roles effectively:

    • For data analysts, this includes proficiency in tools such as SQL and programming languages like R or Python. They should also have experience in data visualization, data modeling, and statistical analysis. 
    • On the other hand, security engineers need to have a deep understanding of security protocols, cryptography, and network security. They should also have experience with penetration testing and vulnerability assessments.

    Additionally, candidates for these roles need to have a strong ability to work with data:

    • Data analysts should be able to clean and process data, and work with large datasets. 
    • Security engineers need to be able to analyze and interpret large volumes of security data to identify patterns and potential threats.

    The Limitations of Traditional Coding Challenges for These Roles

    Traditional coding challenges, which are common in software engineering hiring processes, often fail to evaluate candidates' skills effectively for data analytics and security engineering roles. Coding challenges that require the implementation of algorithms or data structures may be irrelevant to the specific technical skills these roles require, so candidates end up being evaluated on skills that don't reflect the job, which can lead to a poor fit for the position.

    Another limitation of traditional coding challenges is that they don't always effectively measure a candidate's ability to work with data. For data analysts, the ability to work with data is essential, and coding challenges may not always capture this skill. Similarly, security engineers need to be able to work with large volumes of security data, and coding challenges may not accurately measure this ability.

    Project-Based Assessments: A Better Way to Evaluate Candidates

    Project-based assessments provide a more effective way to evaluate candidates for data analytics and security engineering roles. These assessments evaluate candidates on their ability to complete tasks that are relevant to the role, such as data cleaning or analyzing security data. This provides hiring managers with a more accurate evaluation of candidates' skills and experience, and can help ensure a better fit for the position.

    When designing project-based assessments, it's important to ensure that the tasks are relevant to the role. For example, a project-based assessment for a data analyst role might require candidates to clean and analyze a large dataset, while a project-based assessment for a security engineer role might require candidates to analyze and interpret security data to identify potential threats.
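
    To make this concrete, here is a minimal sketch, in Python with pandas, of the kind of data-cleaning task such an assessment might include. The dataset and column names are invented for illustration:

        # Hypothetical data-cleaning task; the dataset and column
        # names are invented for illustration.
        import pandas as pd

        # Tiny invented dataset standing in for the raw file a
        # candidate might be given.
        df = pd.DataFrame({
            "country": ["  usa", "USA ", "canada", "usa", None],
            "signup_date": ["2023-01-05", "2023-01-05", "not a date",
                            "2023-02-11", "2023-02-12"],
        })

        # Normalize inconsistent casing/whitespace, then drop exact duplicates
        df["country"] = df["country"].str.strip().str.title()
        df = df.drop_duplicates()

        # Parse timestamps, coercing malformed values to NaT for inspection
        # rather than silently dropping them
        df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
        print(f"{df['signup_date'].isna().sum()} rows have unparseable dates")

        # Simple summary: signups per country per month
        clean = df.dropna(subset=["country", "signup_date"])
        summary = clean.groupby(
            ["country", clean["signup_date"].dt.to_period("M")]
        ).size()
        print(summary)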

    Get the Complete Guide

    At Byteboard, we understand the challenges that hiring managers face when hiring data analysts and security engineers. That's why we've created two guides — one for each of these roles — to help hiring managers improve their hiring process. Our guides provide a comprehensive checklist of the key considerations when hiring for these roles, including required skills and experience, education and certifications, technical and analytical skills, and soft skills. By downloading our guides, hiring managers can ensure that they are hiring the right candidates for their data analytics and security engineering roles.

    Get the Data Analyst Guide

    Get the Security Engineering Guide

    Hiring for data analytics and security engineering roles requires a unique approach that goes beyond traditional coding challenges. By using project-based assessments and downloading our guides, hiring managers can improve their hiring process and identify top talent for their organizations. As the demand for these roles continues to rise, it's critical for hiring managers to have effective hiring processes in place to attract and retain the best talent.

    News

    Byteboard introduces real-world interviews for Data Analysis and Security Engineering


    Learn More

    In the past few months, the world of hiring has flipped on its head as we moved from a candidate-driven market to a company-driven market seemingly overnight. Job reqs that once drew a few dozen applications on their first day are now receiving hundreds and closing to new applicants within a day. While the number of engineering roles has contracted, the number of candidates has increased, posing a bigger problem for hiring managers and recruiters trying to find the best candidates for their role amongst a lot of noise. Inevitably, companies end up losing out on great talent because their interview process takes too long or feels irrelevant, particularly to senior talent.

    Companies like Figma, Webflow, and Postscript have adopted Byteboard’s project-based interviews to scale their hiring process, giving them more opportunities to find great candidates faster while maintaining high expectations for quality and performance. The Byteboard method of interviewing has proven to work well across thousands of candidates, with a candidate feedback score of 4.46 / 5 and an average reduction in time-to-offer of 10 days for companies like Figma.

    Today we’re expanding our coverage to two of the highest-volume roles after Software Engineering across the industry, Data Analysis and Security Engineering, enabling top engineering teams to offer the Byteboard experience across most, if not all, of their open technical roles.

    How Byteboard Interviews for Data Analysis and Security Engineering work

    As companies collect and store more data, and security concerns continue to rise, the demand for Data Analysis and Security Engineering roles has skyrocketed. As of February of this year, 23% of the open jobs in Healthcare and 29% of the open jobs in Finance were for a Data Analyst position, the highest of any position. Additionally, the occupation of Security Engineers and Analysts is projected to grow by 35%, much faster than the average for all occupations as reported by the Bureau of Labor Statistics. Hiring for these positions, which require specialized skill sets that differ from more general software engineering roles, is challenging, and it is precisely where traditional hiring methods fail. Byteboard’s skills-based methodology and scenario-style interviews make them the best and most comprehensive approach to evaluating the specialized skills required for these roles.

    Security Engineering Interview

    Security engineers are the experts at protecting a business’s important assets. They work on closing vulnerabilities in a company’s systems and protecting important digital assets from potential threats.

    In the Byteboard Security Engineer Interview, candidates walk through a real-world scenario where they are asked to perform a security assessment of a new feature, write a script to analyze network logs (a sketch of that kind of task appears below), and perform a security code review. The result for hiring managers is a comprehensive report on a candidate’s performance on core skills including but not limited to:

    • Security technologies and tools: firewalls, antivirus software, intrusion detection systems, encryption, authentication, VPNs, etc.
    • Security standards and regulations: NIST, ISO, PCI-DSS, HIPAA, etc.

    along with bonus skills like:

    • Cloud computing
    • Machine learning
    • SQL
    • Cryptography

    Companies hiring for any of the security engineering specializations, including Application Security, Offensive Security, and Defensive Security, would benefit from offering the Byteboard Security Engineering Interview to all their candidates.
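
    For a flavor of the scripting portion, here is a minimal sketch of a network-log analysis task of the kind described above. The log format, filename, and flagging threshold are all invented for illustration:

        # Hypothetical log-analysis script; the "timestamp ip status"
        # log format, filename, and threshold are invented.
        from collections import Counter

        FAILED_LOGIN_THRESHOLD = 20  # assumed cutoff for flagging an IP

        failed_by_ip = Counter()
        with open("auth.log") as logfile:  # assumed input file
            for line in logfile:
                parts = line.split()
                if len(parts) < 3:
                    continue  # skip malformed lines instead of crashing
                _timestamp, ip, status = parts[0], parts[1], parts[2]
                if status == "FAILED":
                    failed_by_ip[ip] += 1

        # Report IPs with an unusually high number of failed logins,
        # a simple signal of a possible brute-force attempt.
        for ip, count in failed_by_ip.most_common():
            if count >= FAILED_LOGIN_THRESHOLD:
                print(f"{ip}: {count} failed logins")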

    Data Analysis Interview

    Data Analysts specialize in the cleaning, processing, analysis, and visualization of datasets. They are not to be confused with Data Engineers and Architects, for which we have a different Byteboard Interview that focuses on building software architecture to support the transmission of data across software systems.

    In the Byteboard Data Analysis Interview, candidates walk through a scenario they would often encounter in the wild, where they are asked to reason about database structures, write SQL queries (see the sketch after these lists), perform exploratory and confirmatory data analyses, and create visualizations and mini-reports. Beyond Byteboard core strengths like attention to detail, technical reasoning, and written communication, the core skills also include:

    • Data cleaning and processing
    • Data analysis: EDA, CDA, data mining
    • Data visualization

    along with bonus skills like:

    • Statistical modeling
    • Regression analysis
    • Advanced statistics
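
    As a taste of the SQL portion, here is a minimal, hypothetical sketch of the kind of exploratory query a candidate might write. The table, columns, and data are invented, and an in-memory SQLite database is used so the example runs end to end:

        # Hypothetical exploratory query; the "orders" table and its
        # data are invented. In the interview, a dataset is provided.
        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE orders (region TEXT, amount REAL, order_date TEXT)")
        conn.executemany(
            "INSERT INTO orders VALUES (?, ?, ?)",
            [("EMEA", 120.0, "2023-01-15"),
             ("EMEA", 80.0, "2023-02-02"),
             ("APAC", 200.0, "2023-01-20"),
             ("AMER", 50.0, "2022-12-30")],  # outside the window below
        )

        # Revenue per region for the chosen window, highest first
        query = """
            SELECT region,
                   COUNT(*)    AS num_orders,
                   SUM(amount) AS total_revenue,
                   AVG(amount) AS avg_order_value
            FROM orders
            WHERE order_date >= '2023-01-01'
            GROUP BY region
            ORDER BY total_revenue DESC
        """

        for region, num_orders, total_revenue, avg_order in conn.execute(query):
            print(f"{region}: {num_orders} orders, {total_revenue:.2f} revenue, "
                  f"{avg_order:.2f} avg order value")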

    Though only a glimpse, the breadth of skills evaluated in Byteboard Interviews reflects the rigorous development process our Assessment Development team follows to ensure we launch only the highest-quality assessments, ones we trust to give hiring managers the right signal to make effective and fair hiring decisions. Each new domain expansion begins with researching and identifying the core skills used day-to-day in the industry, keeping our interviews relevant to the expectations of the job today. That high-quality content is paired with our exceptional candidate platform experience, designed to model a real-world working environment that candidates can personalize by choosing their preferred editor and language.

    The introduction of Byteboard Interviews for Data Analysis and Security Engineering, alongside our existing stack of Software Engineering, Frontend Engineering, Mobile Engineering, Data Engineering, and Site Reliability Engineering, makes Byteboard the leading interview platform for high-quality assessments across the widest range of roles, and without doubt a candidate favorite.

    “Honestly this was a fun and engaging assessment. The documentation section was extremely engaging and I feel that it gave me the opportunity to really engage with the content and the code section was not in any way obfuscating or inherently overly anxiety-inducing. It was a pleasure to take this assessment and a huge thank you to Newfront for selecting me for this.”
    - Recent Byteboard Candidate for Newfront

    You can see these new interviews in action on a demo at byteboard.dev/demo or join our incredible Assessment Development team next week as they share insights on the latest trends in hiring for Data Analysis and Security Engineering, and provide practical tips on using assessments to identify top talent. Sign up to receive an invitation here.

    Candidate Resources

    What to expect from the Byteboard Interview experience


    Learn More

    As an engineer applying for a job, you expect the technical interview to assess how likely you are to succeed in that job. Yet the traditional technical interview process, which primarily tests for understanding of overly theoretical concepts and focuses on memorization, is anxiety-inducing and burdensome — often benefiting those who have the time and resources to prepare, while creating a barrier for those who don’t. This is why Byteboard built an alternative: an online, project-based technical interview platform that candidates do on their own time — one that tests for practical application of skills and simulates what engineers actually do on the job (such as coding and working on a project from design to implementation).

    Unlike other technical interview platforms, Byteboard didn’t simply digitize the traditional process but rather redesigned it for fairer, more effective outcomes. We put our solution to the test as part of our internal hiring process. Here's the feedback we got from Brian — an engineer we hired using our platform — about the Byteboard interview experience.

    Q: What led you to joining Byteboard?
    A: Traditional technical interviews so often aren't aligned with the set of skills engineers use in practice, and I was interested in working to help change these practices. Byteboard interested me because of the focus it places on accurately and equitably assessing engineering skills.

    Q: Can you walk us through the project you were given during the Byteboard interview?
    A: The interview was divided into two parts. In the first part, I was given a document describing a plan for the design of a new software system. The document asked me to provide answers to some open questions about the system’s design and then make a recommendation for how best to implement a part of the system. Answering the questions involved considering both business needs and user needs, and it required evaluating trade-offs among implementation options.

    In the second part, I was given a codebase in a language of my choice with a partial implementation of that software system. This part of the interview required me to complete some software development tasks related to the codebase, ranging from writing the implementation of smaller functions to adding new features and solving more complex, open-ended problems.

    Q: How did you approach the Byteboard project-based interview? Did you have a specific methodology or process?
    A: For the first part of the interview, I started by reading the design document presented to me. I took some notes and left some comments on the document with my own questions and observations about what I read. Once I felt like I had a good understanding of the goals of the document and the design decisions to be made, I worked through the open questions. For some of the larger questions, I started by writing a quick outline of my ideas and then filled in additional details and explanations for my thought process. This part of the interview also asked me to make a recommendation by choosing from a few different options. In preparing my recommendation, I evaluated the options, considered their trade-offs, and decided on what I was going to recommend and how I was planning to justify it.

    The second part of the interview felt similar to working on a real-world software project. Because of that, the practices I normally engage in when developing software — familiarizing myself with a codebase, testing code, debugging techniques — were all useful when working on the interview. I was able to work with a language and set of software tools I was already comfortable with, which meant I could focus my attention on understanding the tasks, planning how I was going to implement them, and writing code.

    Q: Did you encounter any challenges during the interview, and how did you overcome them?
    A: The Byteboard interview can be challenging in many of the same ways that software engineering can be challenging. The interview requires making decisions among reasonable options that all have different trade-offs. It requires taking open-ended requirements and acting on them. And it requires thinking about how to implement a solution technically in a way that’s clean, logical, and efficient.
    One of my strategies in working through the interview was to consistently communicate my thinking: I documented my thought process in notes I left in the design document and comments I included in my code. If there was anything I wanted to add that I didn’t have time for in the interview, I made a note of that too.

    Q: Did you have to work with others during the interview, and can you describe your collaboration process?
    A: The interview itself is completed independently, so I didn’t work directly with other people, but several of the interview tasks emphasize collaboration and communication. I was answering others’ questions, writing questions for others, and reading others’ code.

    Q: How did you feel about using Byteboard overall?
    A: I enjoyed the process of taking the Byteboard interview. I found that it let me engage in multiple skills and kinds of thinking, from algorithmic analysis to higher-level systems design. The interview gave me a chance to demonstrate my strengths, and it did so in a way that felt authentic to real software engineering practice.

    Industry Trends & Research

    Hiring engineers in the age of AI and ChatGPT


    Learn More

    AI-assistance tools will inevitably change the way engineers work; in some cases, they already have. In the right hands, they can be used to create efficiencies and troubleshoot problems. But, an inexperienced engineer could just as easily use them to introduce serious flaws into a codebase, made even more dangerous by the fact that AI code often looks right at first glance.

    At Byteboard, we’ve been thinking a lot about how tools like ChatGPT and Github Copilot impact the role of engineers, how we think they’ll change technical interviews, and how we can adapt our assessments to the mainstream usage of these tools.

    How AI impacts the future of software engineering

    AI-assisted tools are no doubt about to play a major role in the future of software engineering. In the short term, ChatGPT, as well as more specialized tools like Github Copilot, has demonstrated clear strengths as well as limitations. Its primary strength is the speed at which it can generate content. It can write dozens of reasonable-sounding sentences (or lines of code) in a matter of seconds, when the equivalent content might take a human minutes or hours to create.

    But its primary limitation is its trustworthiness. Both in prose and in code, it can often produce correct answers, and just as often produce answers that only *appear* correct, but are significantly flawed upon inspection. In specialized fields like software engineering, it can take significant expertise to differentiate between the two. As the complexity of the problem increases, so does the frequency of ChatGPT’s mistakes, as well as the level of expertise it takes to recognize them.

    Because of this, ChatGPT is currently most useful as a speed hack. Rather than starting with a blank slate, a software engineer can start by asking ChatGPT to solve their problem for them. It will then generate a significant amount of content far more quickly than the engineer could have written on their own. But if the problem includes any meaningful complexity, then in order to produce code that *works as intended* (or prose that is truly accurate), the software engineer has to take the AI-generated content and apply significant engineering expertise to correct ChatGPT’s (often well-hidden) flaws.

    In other words, a poor software engineer can use ChatGPT to quickly produce software systems that appear well-built, but contain significant flaws. But a strong software engineer can use ChatGPT to quickly produce software systems that are well-built.
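
    As a toy illustration of our own (not actual ChatGPT output), consider the kind of code that looks right at a glance but is flawed on inspection:

        # Looks-correct-but-flawed code, written by us for illustration.
        def median(values):
            """Return the median of a list of numbers."""
            ordered = sorted(values)
            return ordered[len(ordered) // 2]

        # Plausible, and correct for odd-length input:
        print(median([3, 1, 2]))      # 2, correct

        # But silently wrong for even-length input: the median of
        # [1, 2, 3, 4] is 2.5 (the mean of the two middle values), not 3.
        print(median([1, 2, 3, 4]))   # 3, incorrect

    Spotting that kind of error is exactly the review expertise the next section argues will matter more.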

    In the long term, we think AI-assistance tools will become more trustworthy, and more capable of producing correct work within contexts of increasing complexity. But it is unlikely that the fundamental ideas here will change; the more advanced the specialization, the longer it will take before AI tools can be trusted to produce correct work, and the more subject-matter expertise it will take for a human to be able to recognize and fix the AI-generated flaws.

    How the role of “engineer” changes

    Adoption of new tools and workflows always takes time (particularly so in specialized industries), so the coming AI-assistance revolution will happen over the course of the next few years, not the next few months or weeks. Only a minority of software engineers have integrated AI-assistance tools into their workflows, so most engineering work has continued exactly as it did before the introduction of these tools.

    That being said, we do think there are three primary ways AI will change engineering for the organizations using it.

    • Engineers will be able to accomplish some aspects of their work much faster. Engineers will have more time to answer the “what” and “why” questions of software engineering, while the AI-assistance tools accelerate the answers to the “how” questions. This means that the role of the engineer shifts towards thinking about product and systems design, though they will still need to retain their technical skills in order to fix the flaws in AI-generated code.
    • Engineers will spend more time on code review than on code generation. Since AI-assistance tools generate copious amounts of code with hidden flaws, engineers who make use of them will spend more of their time carefully reading the code the AI generates, and less time writing code themselves. This means that attention to detail becomes a more important skill, while sheer code-writing output becomes less important.
    • Organizations will have to take more care to hire engineers with the right skills. Since AI-assistance tools can generate code that “looks correct” to the untrained eye, it will be all the more important that organizations hire engineers who can tell the difference between flawed code and correct code. Additionally, since the role of the engineer will shift towards product and systems design, organizations will need to hire engineers who can effectively analyze the product space and the organization’s goals. And finally, since the pace, and ultimately the reach, of engineering work will accelerate, it is of great importance that organizations select engineers who can be trusted to carefully consider the impact their work will have on customers, users, vulnerable populations, and the rest of the world.

    How AI-assistance tools change assessments

    The introduction of AI-assistance tools has introduced a new variable in hiring for software engineers. Until now, in order to assess a candidate’s technical ability, organizations have relied heavily on coding challenges: exercises in which a candidate is asked to write an isolated, complex algorithm in response to a clearly-defined prompt. We didn’t think those were great anyway—engineers don’t work in a vacuum, and no problem they’d see in their day-to-day work would have such clear requirements. But now, we have another reason not to like them: these problems are exactly the sort of tasks that ChatGPT is able to easily solve on its own. ChatGPT performs very well on tasks with strictly-defined prompts, clear boundaries, and singular solutions.

    As such, we expect organizations that use such questions to embark on serious anti-cheating measures, like only assessing candidates in in-person settings; blocking candidates’ access to the internet or their own IDEs; and requiring candidates to write their code in unfamiliar and restrictive mediums like pen-and-paper or whiteboards. These measures not only severely limit the candidate’s ability to showcase their talent, introducing stress and unfamiliarity and cutting them off from the tools they would use on the job; they also increase costs for the organization.

    The other path forward is to introduce complexity into assessments. What makes real-world applications hard for ChatGPT is that nearly all real-world problems contain a particularly messy sort of complexity – the complexity that comes from context.

    “Find the shortest palindrome in a given string” is easy for ChatGPT. “Given our existing codebase, revise our song recommendation algorithm to increase exploration and engagement for new users without upsetting power users too much” is hard for ChatGPT.

    To us, the real issue with asking engineers to solve problems that are easy for ChatGPT is not that it makes it easy for engineers to “cheat” by using ChatGPT. The real issue is that being able to answer those sorts of questions is not what makes someone a good software engineer.

    A good software engineer is someone who can perform real-world tasks, not someone who can write complex algorithms in isolation. ChatGPT is just accelerating our collective understanding of what was already true – algorithmic coding challenges aren’t really good assessments of the expertise and skills required to be a software engineer.

    At Byteboard, we’re facing this new challenge by continuing to add complexity and ambiguity to the coding tasks for our SWE assessments, thinking through what skills are becoming more (or less) necessary in the AI-assisted age, and considering a variety of mechanisms to more deeply assess how well a candidate understands the goals and context of a question. We’re also looking into other anti-plagiarism tools, but our goal is not to simply become a cheating prevention service – it is to assess candidates fairly and effectively in a role where what competency means is rapidly changing.

    We aim to design assessments that make it impossible to successfully “cheat” with AI tools, because performing well requires candidates to engage specialized skills in tasks with real-world complexity. Eventually, we expect AI assistance to become like Google – a resource that everyone is expected to use to do their job most effectively. Our tools are built with that future in mind.

    Interview Quality

    Six intersectional approaches to improving your hiring process


    Learn More

    Through hundreds of market research interviews over the past few years, we know that most technology companies want to build diverse, inclusive teams where everyone can thrive. With the move to remote work, that desire hasn't changed—and while other factors seem to demand major shifts in hiring processes, the reality is that the same best practices pre-pandemic are still the best practices a team can adopt while hiring remotely.

    Our research shows that there are a few vital actions that teams can take to move the needle towards better hiring outcomes:

    1. Audit job listings for realistic minimum qualifications (MQs) and inclusive language
    2. Use structured rubrics for all interviews
    3. Track pass-through rates and candidate experience
    4. Reduce potential for bias with anonymization
    5. Interrogate sourcing practices
    6. Understand why Diversity, Equity, and Inclusion (DE&I) and intersectionality matter

    Audit job listings for realistic minimum qualifications (MQs) and inclusive language

    Studies show that women are significantly less likely than men to apply to a job where they do not meet all the listed “minimum qualifications,” or MQs. Similarly, using exclusionary language in job descriptions can significantly impact the number of women and people of color applying to a given job.

    • Limit your MQs to the true minimum for the role. You might really want to hire someone proficient in 3+ programming languages, with 5+ years of experience and an ML background. But would you hire someone with a deep ML background but only 1 programming language? If you would hire someone without a given qualification in the right circumstances, remove it from the MQs.
    • Remove gendered words. Common culprits in engineering listings are "hacker," "rockstar," or "crush code." It's equally important to remove gendered adjectives or descriptors (such as "assertive" or "nurturing") in favor of unbiased alternatives (e.g., "dedicated" or "conscientious"). There are several online tools that can help you identify gendered language in your job listings.

    Use structured rubrics for all interviews

    A quote from a recent meta-study noted, "In the 80-year history of published research on employment interviewing... few conclusions have been more widely supported than the idea that structuring the interview enhances reliability and validity."1

    Why is structure important? By defining upfront what skills matter for a role (those MQs!) and sticking to questions that evaluate for those skills against objective criteria, you are reducing the likelihood of interviewers falling back on gut decisions. Using structured evaluation criteria will also help you to debunk the false narratives that can accompany hiring efforts with emphasis on increasing diversity, such as the fear of "lowering the bar." Structured rubrics and detailed training on how to evaluate against them ensure that your hiring criteria will be consistently and fairly applied across applicants.

    Building structured rubrics takes time and effort. One way to start is by outlining the goals of each interview in your loop and identifying several questions you want the interviewer to be able to answer about the candidate by the end.

    Once you know what you are looking for in a candidate, you can define discrete buckets they might fall into. For example, when describing the trade-offs between several options in an architecture interview, a candidate might be extremely systematic in addressing all the considerations or concerns, or perhaps they might address some considerations while omitting others, or they might fail to make a convincing case altogether. These buckets can become the underpinning of your rubrics.

    After your rubrics are laid out, the last step is to define the true baseline needed for a given skill (MQs!). Maybe you are hiring for a role that must have extremely strong system design skills, in which case a weaker system design score—regardless of their coding skills—should result in a "no hire" decision. For other roles, system design might be teachable or "nice to have."
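
    As a sketch of what this can look like in practice, a rubric might be encoded like the following. The skill names, bucket descriptions, and cutoffs are invented for illustration:

        # Hypothetical structured rubric; skills, buckets, and the
        # per-skill baselines (the MQs) are invented.
        RUBRIC = {
            "system_design": {
                3: "Systematically addresses all considerations and trade-offs",
                2: "Addresses some considerations while omitting others",
                1: "Fails to make a convincing case for any option",
            },
            "coding": {
                3: "Correct, readable solution with edge cases handled",
                2: "Mostly correct solution with minor gaps",
                1: "Incomplete or incorrect solution",
            },
        }

        # True baseline per skill: this hypothetical role demands strong
        # system design, while coding is considered more teachable.
        MINIMUMS = {"system_design": 3, "coding": 2}

        def meets_bar(scores):
            """Return True only if every skill meets its minimum score."""
            return all(scores.get(skill, 0) >= minimum
                       for skill, minimum in MINIMUMS.items())

        print(meets_bar({"system_design": 3, "coding": 2}))  # True
        print(meets_bar({"system_design": 2, "coding": 3}))  # False: below the design bar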

    Track pass-through rates and candidate experience

    It's impossible to know how your company is doing if you don't track hiring metrics. Start measuring pass-through rates (PTRs) at all parts of your funnel, and collect feedback from candidates to determine whether your process is improving over time. This will allow you to adapt your processes as needed. Although tracking PTRs across demographics can be an organizational challenge, it will result in much better hiring outcomes. These PTRs will empower you to identify parts of your process that need to change because they disproportionately impact candidates from historically underrepresented groups.
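
    As a simple illustration of the bookkeeping involved, PTRs are just the ratios between adjacent funnel stages; the stage names and counts below are invented:

        # Hypothetical funnel; stage names and counts are invented.
        funnel = [
            ("applied", 400),
            ("screen", 120),
            ("technical interview", 48),
            ("onsite", 24),
            ("offer", 10),
        ]

        # Pass-through rate from each stage to the next
        for (stage, count), (next_stage, next_count) in zip(funnel, funnel[1:]):
            print(f"{stage} -> {next_stage}: {next_count / count:.0%}")

    Computing the same ratios segmented by demographic group is what surfaces the stages that disproportionately screen candidates out.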

    Reduce potential for bias with anonymization

    Unconscious-bias training is crucial, but can only go so far. Find places in your hiring process where a candidate's personally identifying information (PII, such as name and email) is shared when it doesn't have to be, and try to limit access. This ensures that interviewers, hiring managers, and other decision makers don't lean on conscious biases (like what school a candidate attended) or unconscious ones. Most Applicant Tracking Systems (ATSes) allow you to anonymize candidate information from various viewpoints.

    For example, screeners could review resumes with name, school, and other PII removed. Take-home interviews can be anonymized before review. Hiring managers can review interviewer feedback without associating it back to a candidate until after they've made a hire/no hire decision. If your company hires by committee, train participants and interviewers to discuss candidates using gender-neutral language.

    Interrogate sourcing practices

    It's not uncommon for companies to blame homogeneous candidate pools on the "pipeline." The numbers might lead some to think this is true: among young computer science and engineering graduates with bachelor's or advanced degrees, 8% are Hispanic/Latinx and 6% are Black. But according to company diversity reports, tech workers at Google, Microsoft, Facebook and Twitter are on average only 3% Hispanic/Latinx and 1% Black,2 a much lower percentage than what the pipeline has to offer. Blaming the pipeline ignores real, systemic issues in hiring and retention.3

    For example, companies tend to rely on referrals for 48% of their hiring needs,4 but because of the ways that personal and professional networks are built, referrals are notoriously likely to result in new hires that look and think like your current team.

    Similarly, focusing active recruiting exclusively on "top computer science schools" can result in similar trends, since hires from this pool will in turn refer classmates from the same training backgrounds. Worse still, this leaves great candidates from institutions serving historically underrepresented groups in tech (community colleges, state schools, HSIs, HBCUs, women's colleges, and bootcamps) out of your pipeline.

    But if referrals and top-school recruiting are out, how do you find candidates to join your team?

    • Elevate your company as a great place to work. Connect authentically with communities by openly discussing how your team works on interesting and unique challenges, the ways that you strive towards an inclusive workplace, and how you retain your talent.
    • Draw from diverse sources by recruiting at a broader range of institutions, leveraging virtual affinity conferences and events, and exploring alternative talent sources beyond referrals.
    • Focus on diversity at every level. It is just as important to source intentionally at the manager and executive levels as it is at entry level or early-career. It’s much harder to convince talent to join your team if they don't see growth potential at your company.

    Why Diversity, Equity, and Inclusion (DE&I) and intersectionality matter

    Not all leaders recognize or understand the business and moral imperative to build diverse teams. But research shows that diverse teams make better decisions,5 drive higher profit,6 and build superior products.

    Additionally, some leaders may choose to emphasize only one lens on diversity (such as gender or political ideology) in order to focus their efforts. However, focusing on a singular lens of diversity creates new problems by worsening the gap for non-prioritized groups. For example, a team focused only on hiring more women might end up hiring mostly white women, which may improve the gender gap while actually widening the racial gap.

    For this reason, it is crucial that everyone at your company, from your Head of HR to your Head of Engineering to every recruiter, interviewer, and hiring manager, understands the importance of Diversity, Equity & Inclusion work (DE&I) and of taking an intersectional approach that values all dimensions of identity (ethnicity, gender, sexual orientation, age, culture, disability, nationality or geography, socio-economic status, and experience). This work cannot be undertaken by individual champions of diversity alone; when your entire company understands the business need for inclusive hiring, DE&I and long-term improvements to processes and outcomes will be prioritized.

    We don't know how long companies will continue to work remotely. Perhaps some teams will return to the office in 2021, while for others this change might be a permanent one. Regardless, these tips are just as important for remote teams as they are for in-person ones.

    Since the need to hire top talent won't ever be deprioritized, now is the perfect time to re-examine your hiring process and begin integrating these best practices at your company. The impact will be well worth the effort.

    Byteboard is a technical interviewing solution that applies structured, fully anonymized evaluation, and can assist you with tracking PTRs for your talent pool. If you are interested in learning more, request a demo.

    References

    1. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.218.9107&rep=rep1&type=pdf
    2. https://www.nytimes.com/2016/02/26/upshot/dont-blame-recruiting-pipeline-for-lack-of-diversity-in-tech.html
    3. https://www.newyorker.com/business/currency/why-cant-silicon-valley-solve-its-diversity-problem
    4. https://hbr.org/2019/05/recruiting?ab=hero-main-text#your-approach-to-hiring-is-all-wrong
    5. https://medium.com/awaken-blog/compilation-of-diversity-inclusion-business-case-research-data-62a471fc4a42
    6. https://www.mckinsey.com/business-functions/organization/our-insights/why-diversity-matters

    News

    Announcing the Byteboard Hiring Consortium!


    Learn More

    The past year has brought about a seismic shift in the talent market, and the competition for technical talent is fiercer than ever. With the shift to remote hiring, candidates have their pick of which companies they want to work for, or even apply to. Companies that offer a high-quality interview process that values candidate time and energy will attract and hire top talent. Byteboard developed the first product to revolutionize the technical interview. Today, we’re excited to take it a step further with the launch of the Byteboard Hiring Consortium, extending that revolution to the interview process as a whole.

    Traditionally, candidates interviewing at multiple companies have endured a similar initial interview process with every company they apply to. No one wins in this scenario. Candidates spend hours demonstrating the same skills over and over again and eventually lose interest in going through the interview process altogether. Companies miss out on great candidates who are stretched thin and, as a result, never get the chance to connect with them authentically the way they would at later stages of the interview process. Now, when a candidate interviews with a Byteboard Hiring Consortium member company, they can share their work rather than start from scratch.

    When interviewing with a member of the Byteboard Hiring Consortium, candidates that have recently interviewed with another member company have the option to share their raw interview materials immediately instead of taking another Byteboard interview. Each member company receives a candidate performance report customized to their hiring criteria. Companies no longer have to wait for candidates to take the interview, enabling them to reduce their time-to-hire by 7 days or more. The Byteboard Hiring Consortium allows companies to hire great talent faster while also offering a high-quality and streamlined candidate experience—making it a win-win for all.

    We are delighted to be launching the Consortium with 25 member companies, including our longtime partners Lyft and Betterment.

    Attracting top talent requires an interview process that candidates love. Byteboard centers candidate experience in everything we do, as is reflected in our average candidate feedback score of over 4.4 out of 5. The Byteboard Hiring Consortium furthers our commitment to creating a technical interview process that companies trust and candidates love. Providing high quality interview experiences has never been more important, and Byteboard Hiring Consortium membership will ensure that technical hiring teams are ready.

    Candidates interested in learning more about the companies using Byteboard can head over to our Job Board.

    Companies interested in joining the Byteboard Hiring Consortium can sign up for a Byteboard demo here.

    Interview Quality

    What Byteboard’s technical interview does differently from other solutions – and why that matters


    Learn More

    We created Byteboard so engineering teams can hire more confidently and equitably. The industry status quo favors a small subset of engineers from similar, non-diverse backgrounds, and the industry has a long way to go to change. In addition to being bad for candidates and bad for the industry, this paradigm leaves a lot of great talent on the table.

    Breaking the status quo requires doing things differently. At Byteboard, that means redesigning the interview so you get great signal on candidates, being thoughtful about how we assess open-ended interviews, and improving the candidate experience. Fewer mis-hires, more happy candidates. Here’s more on how we do it.

    Project-based interview

    How it’s different

    Our project-based interviews reflect the day-in-the-life of an engineer. The first is a technical reasoning exercise, in which candidates interact with a design document, reason through various implementation options, and provide a recommendation. The second is a coding implementation exercise, in which candidates work with a rich, realistic codebase. The two exercises are cohesive and work on the same problem. This format emphasizes critical thinking over memorization, and allows us to capture complexities that aren’t possible with start-from-scratch, LeetCode-style, or quiz-style questions. We are able to get signal on the skills candidates need to be successful in the role itself, like communication and collaboration, not just on the assessment.

    Why it’s better

    Automated screeners and interview outsourcing services don’t give a full picture of what a candidate is capable of. They offer a patchwork of contrived exercises and miss the opportunity to evaluate for skills we capture in the Byteboard interview, such as systems reasoning, trade-off analysis, and product sense. As a result, they over-index on coding skill and cater to those with more time to study, leading to a less equitable candidate experience. By using one realistic problem set, Byteboard assessments get a better signal on how the engineer will perform on the job.

    Human Review

    How it’s different

    Our candidate work samples are graded by practicing engineers and researchers. Their evaluation of the anonymized work sample is calibrated using specific questions that ensure grader consistency. They also provide short-answer feedback on specific tasks and overall performance. Based on the grader responses and highly structured rubrics, the Byteboard team generates a skills report highlighting candidate performance. This report includes a skills map, qualitative short-answer feedback that highlights specific strengths and weaknesses of note within the work sample, and an actionable recommendation on whether to move forward with the candidate. Byteboard produces a detailed skills report within two business days, so you get the convenience and time savings of full automation with the rigor and nuance of human grading.

    Why it’s better

    We’re not anti-automation, necessarily, but at this stage, the technology can’t capture the nuance of candidate output. There are many ways to be a good software engineer. Automated engines miss all but the solution they’re trained on, giving only a pass/fail. Of course, the rigor of the rubrics matters, too. Interview outsourcing does have the potential to capture this nuance, but it depends on the standards that are set. Byteboard is transparent about how we come to our recommendation by showing grader feedback, the skills report, and the work sample itself. And, we regularly update our rubrics and grader calibration based on performance data.

    Bias Reduction

    How it’s different

    At every layer of the Byteboard experience, we work to reduce bias. The project-based structure reduces bias in two ways. First, allowing candidates to take the assessment on their own time reduces test anxiety and improves performance. Second, we anonymize candidate identity in grading, so each candidate is graded fairly.

    Why it’s better

    With Byteboard, you don’t have to give up anonymity to have human graders. Because the end result of a Byteboard assessment is a work sample, it can be easily anonymized for reviewers. No need for a resume, a name, or even the hiring company’s name. This gives you the richness of a human-reviewed assessment with the fairness of an anonymous review.

    But, anonymizing the work samples is only the first step. Because there’s so little transparency in automated processes, you can’t take for granted that they are inclusive in practice. Our research shows companies often rule out candidates for small mistakes or give false negatives. Byteboard’s graders also include short answers that expand on a candidate’s skills map and allow you to dive deeper into the work sample itself, drawing a clear line between the recommendation and the candidate’s work.

    Better Candidate Experience

    How it’s different

    Technical interviews are known as a necessary evil in an industry that competes fiercely for talent, so a better experience is a differentiator. Byteboard gives candidates the runway to show what they’re capable of, without studying theoretical questions or fumbling through disjointed problem sets. And they can set the conditions for their own success: Byteboard lets them pick between their own IDE and a cloud editor, and select their preferred coding language. As an organization, you can show your future employees that you care about their experience and their time pre-Day 1. Candidates rate Byteboard 4.4 out of 5 stars.

    MUCH MUCH better than most of the other platforms: Codility, HackerRank, CoderPad, and Leetcode (to name a few). I honestly wish that every company used this. It made me feel like an engineer and not a pawn being quizzed to solve a theoretical riddle in one hour.

    Why it’s better

    Great candidates have a lot of options, and they will drop from the funnel if they’re frustrated with their assessment experience. This is common with automated screeners in particular, which are built for the sole purpose of weeding out candidates, with little to no regard for candidate experience (or signal, for that matter). Byteboard helps keep qualified candidates engaged with a project similar to what they would do on the job and the opportunity to show everything they bring to the table.

    Interview Quality

    There's a lot more to engineering than coding


    Learn More

    How Ezoic’s engineering manager grew the startup using Byteboard

    Ezoic is a rapidly growing AI-based web optimization startup based in Carlsbad, CA. We chatted with Ezoic’s engineering manager, Jason Bauer, about his experience with using Byteboard over the last year to grow their technical team.

    The time spent was the biggest problem Byteboard helped solve. The people that were doing our interviews are engineers. We don't have a dedicated hiring team. We don't have technical recruiters. So we were taking engineers away from their job, not just for the hours interviewing, but they've got to read the resume. They've got to prep. They've got to decide the question that they're going to be working on. It took a lot of time and interrupted the flow of their core work.

    We also realized that our traditional interview may have excluded people that we might otherwise want on our team. Not everybody is great at these algorithms type questions. But you could write a graphing tool to show publishers some data. That's what we're looking for. So we realized that we were probably disqualifying engineers that might have been great for us just because of the types of tests that we were doing.

    The Byteboard interview looks quite different from traditional technical interviews, as it simulates asynchronous engineering work by asking candidates to work through a technical design spec and coding implementation exercise.

    How have Ezoic candidates responded to taking the Byteboard interview as a part of your interview process?

    I've heard from a lot of candidates that they really like the two parts and that it's more real world—not just theoretical algorithm questions asked by most online assessments. I've heard that from dozens of candidates, even ones that don't necessarily do well. They still really liked the interview process and enjoyed doing it.

    Great editor and I loved, loved, loved that I was tested practically. I felt like I really got to showcase the kind of skill and efficiency that I ACTUALLY be using if I were to be hired for a position. Really cannot praise this style of interview enough. I felt comfortable and having the resources I would have in a real job scenario really made me feel like this was a great assessment of what I can do.
    Ezoic candidate

    Byteboard provides you with a structured report of the candidate's performance and a recommendation calibrated against the skills you're looking for.

    How have the recommendations and reports enabled your team to hire more confidently and efficiently?

    The recommendations made by Byteboard work really well. I really like seeing the breakdown of where the candidate is strong or not and I trust Byteboard to show all the right highlights. That saves me a ton of time, especially because I don't feel like I need to review the entire design document or the entire coding sample. I can get a really good summary and understand what the candidate did well and what they didn't through the Byteboard report.

    Whenever there is a candidate I do need to review further, I like the color coding breakdown of the skills. It allows me to easily see what skills that candidate is lacking in. Just last week we had a candidate who was weak on ‘attention to detail.’ We ended up moving forward with the candidate, but that was something I was able to discuss with the team and make sure to address that skill gap in the final interview. We didn't have that skills breakdown with our previous interview system.

    Byteboard focuses on assessing for on-the-job software engineering skills. By having a more holistic interview process, how has your approach to hiring changed?

    It's interesting to see that some people do really well on the Byteboard coding portion, but poorly on the design exercise. Those are the people we probably would have moved forward with before using Byteboard because they are a great coder, but that doesn't necessarily mean that they would have been a great engineer since they might struggle with the design of some of the features. That's one of my favorite parts about Byteboard—is that you have both sides (the design and the coding) and assess the full range of engineering. Engineering is not just coding. There's also a lot more to it.

    By using Byteboard, Ezoic increased their onsite-to-offer rate from less than 20% to over 45%, while saving their engineers hundreds of hours to focus on engineering. Over 87% of Ezoic candidates rate their experience with Byteboard favorably.

    Ezoic has many open roles across engineering. Apply to their open positions here.