Good question, and there are many ways to solve the word-counting problem! I didn't intend so much to solve it as to demonstrate hash tables.
As for a tidyverse solution, the thing I would watch out for — both in practice and in an interview — is scalability and performance. As I linked in the previous comment thread, hash tables need nearly constant time per element, regardless of the number of elements. Data frames, lists, and many other kinds of solutions require more and more time to look things up as the object grows.
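To make that difference concrete, here is a small sketch (the variable names e and df are just illustrative). In R, an environment created with hash = TRUE is backed by a hash table, so a key lookup is roughly constant time, while finding a row in a data frame scans the whole column:

```r
# Hash-table lookup: an environment with hash = TRUE
e <- new.env(hash = TRUE)
assign("apple", 1L, envir = e)
get("apple", envir = e)          # hashed lookup, roughly constant time

# Data-frame lookup: a linear scan of the word column
df <- data.frame(word = c("apple", "pear"), n = c(1L, 1L))
df$n[df$word == "apple"]         # cost grows with the number of rows
```

For a handful of words the two are indistinguishable; the gap only shows up as the number of distinct keys gets large.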
For many applications the performance differences don't matter. However, in an interview situation, I would expect follow-up questions about the pros and cons of any particular solution.
For tidyverse, I might ask about the footprint size and the multi-year code stability of a solution used in production. (The goal of a question like that is to probe how much a candidate understands about the tradeoffs of code. We say a lot more about that kind of question in the book.)
FWIW, assuming a "words" vector, the least complex solution to this specific problem in R is: table(words).
However, for that answer, I would do exactly what my interviewer did at the time, and say, "OK, but assume you have to do this in base R and there is no table() [or tidyverse] function!"
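In that spirit, here is one base-R sketch of the hash-table approach (count_words is just an illustrative name). It uses an environment with hash = TRUE as the hash table, so each word's count is updated in roughly constant time:

```r
# Count word frequencies in base R using an environment as a hash table
count_words <- function(words) {
  counts <- new.env(hash = TRUE)
  for (w in words) {
    prev <- if (exists(w, envir = counts, inherits = FALSE)) {
      get(w, envir = counts)
    } else {
      0L
    }
    assign(w, prev + 1L, envir = counts)
  }
  # Collapse the environment into a named integer vector of counts
  unlist(as.list(counts))
}

count_words(c("the", "cat", "sat", "on", "the", "mat"))
```

The order of the result is arbitrary (hash tables don't preserve insertion order), which is one of the tradeoffs worth mentioning in an interview.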
Riccardo Melani
Hello and thanks for this amazing blog! Assuming we have that character vector (words) containing the words to count (which would need to be generated following some text mining I guess?), how about this in tidyverse: wordcounter <- data.frame(word = words) %>% count(word)
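Spelled out as a runnable snippet (assuming dplyr is installed, with a made-up sample words vector):

```r
library(dplyr)

words <- c("the", "cat", "sat", "on", "the", "mat")

# count() groups by word and tallies the rows in each group
wordcounter <- data.frame(word = words) %>% count(word)
wordcounter
```

This returns a data frame with one row per distinct word and an n column holding its count.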