A weeder task is a super simple programming task that should be second nature. Some options for different stacks:
JavaScript - use array functions to turn the input into a given output (2-3 lines of code)
React-specific - pass data from an input field to a text field in separate components (5-10 lines of code, we provide the rest)
Python - various list, dict, or set comprehension tasks (an example after this list)
Rust - something with iterators or traits
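For instance, the Python one might be something in this spirit (a made-up example of the genre, not an actual prompt we use):

```python
# Hypothetical weeder prompt: from a list of (name, price) pairs, build a
# dict mapping names to prices for everything under $10.
products = [("apple", 1.50), ("tv", 499.99), ("bread", 3.25)]

cheap = {name: price for name, price in products if price < 10}
print(cheap)  # {'apple': 1.5, 'bread': 3.25}
```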
Basically, we just want to know if they can write basic code in the position we’re hiring for.
I only vaguely know what you mean by “We were hoping to say they needed to write some tests to get a code review”.
The “programming challenge” isn’t really a challenge for programming skill, but more of a software engineering challenge to see how they turn vague requirements into a product the company could ship.
Let’s say the task is to build a CLI store, where the user inputs items and quantities they want to buy, and the app updates inventory after a sale. For the sake of time, we’ll say data doesn’t need to persist between runs.
I think any developer could build something like that in about 15-20 min, maybe less if they’re familiar with that kind of task. In Python, it’s basically an input() loop that queries an “inventory” dictionary and updates an “order” dictionary.
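A rough happy-path version, just to show the shape I have in mind (my own quick sketch, not a reference solution; note that none of the corner cases below are handled yet, which is the point):

```python
inventory = {"apple": 10, "bread": 5, "milk": 3}
order = {}

while True:
    line = input("item and quantity (or 'done'): ").strip()
    if line == "done":
        break
    item, qty_str = line.split()  # blows up on malformed input
    qty = int(qty_str)            # blows up on a non-numeric quantity
    order[item] = order.get(item, 0) + qty
    inventory[item] -= qty        # KeyError on an unknown item; stock can go negative

print("order:", order)
print("inventory after sale:", inventory)
```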
However, there are also a bunch of corner cases:
user inputs invalid item
insufficient inventory
invalid quantity
add the same item twice (not an error, maybe a warning?)
what if user decides to abandon the purchase and start over?
if we make it concurrent (i.e. a server with multiple users), how do we ensure inventory is correct?
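On the concurrency point, in a single-process threaded server the usual answer is to make the check-and-decrement atomic, e.g. with a lock (a sketch under that assumption; separate processes would need a database transaction or similar instead):

```python
import threading

inventory = {"apple": 10}
inventory_lock = threading.Lock()

def reserve(item: str, qty: int) -> bool:
    # The check and the decrement must happen atomically; otherwise two
    # concurrent buyers can both pass the check and oversell the item.
    with inventory_lock:
        if qty > 0 and inventory.get(item, 0) >= qty:
            inventory[item] -= qty
            return True
        return False
```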
After they build the basic solution, we ask them an open-ended question: how confident are you that the code is correct? They wrote it in 20 minutes or so, so their confidence should be pretty low. I’ll then ask what they’d do to get more confidence.
A good software engineer wants to ensure not only that the happy path works, but that the corner cases are handled and keep working regardless of what other developers add in the future. So I’m looking for one or all of:
peer review of the code
unit tests (a sketch after this list)
documentation
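On the unit-test answer, even a few focused tests over the corner cases go a long way. A hypothetical sketch, assuming the logic has been factored into a reserve(inventory, item, qty) helper (a name I’m making up here, kept as a plain function so it’s testable):

```python
import unittest

def reserve(inventory: dict, item: str, qty: int) -> bool:
    """Decrement stock if the request is valid; report whether it succeeded."""
    if qty <= 0 or inventory.get(item, 0) < qty:
        return False
    inventory[item] -= qty
    return True

class ReserveTests(unittest.TestCase):
    def test_happy_path(self):
        inv = {"apple": 10}
        self.assertTrue(reserve(inv, "apple", 3))
        self.assertEqual(inv["apple"], 7)

    def test_unknown_item(self):
        self.assertFalse(reserve({}, "pear", 1))

    def test_insufficient_inventory(self):
        inv = {"apple": 2}
        self.assertFalse(reserve(inv, "apple", 3))
        self.assertEqual(inv["apple"], 2)  # nothing deducted on failure

    def test_invalid_quantity(self):
        self.assertFalse(reserve({"apple": 2}, "apple", 0))

if __name__ == "__main__":
    unittest.main()
```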
If they say it’s perfect and to ship it, that’s concerning, especially if I identified an issue while they were solving it. We identified the same issue twice for the candidate who relied on AI, and they still said “ship it.” We also noticed other issues that we didn’t tell them about.
So we’re looking for both competence and self-awareness. Know your stuff, but also recognize your limitations. Meeting the requirements is only half of development IMO; you also need to maintain it long term.
That makes perfect sense. Thanks for the detailed reply. I think one of the reasons I feel slower than I want to be is that I tend to think a lot about those kinds of edge cases. My main problem now is learning to find the right size for prototyping/building.
That said, I’ve written thousands of loops at this point, but I’ve only done an input loop like that in Python once or twice (in classes, as I recall), so that specific way of getting the application started would probably fall in the “I’d be embarrassed I’d need to google that” category. But I think once I got started I’d code out a decently competent prototype of a basic store (I’ve built an e-commerce store before, so I’m familiar with some, but not all, of those edge cases). I would never think that code was ready to ship, though.