@paradigm
It’s better to give someone a good question and send them on their way than a good answer and be stuck with them forever.
Great walkthrough on custom data structures. I have a different take on the criteria: efficiency should always be subservient to effectiveness. Effectiveness is doing the right thing, and in the context of coding an analytic solution that means optimizing for the least efficient component, which is always wetware. Brian Kernighan nails the two key points (see also Donald Knuth's cautions against premature optimization):

- If optimization has become a concern, suspect a poor choice of algorithm.
- Everyone knows that it is twice as hard to debug as it is to code, so if you code as cleverly as you are capable, you will never be able to debug your code.
I'll hazard a top-of-the-head response to reflect mind share, after mentioning two essential find-the-tool helpers: CRAN Task Views give a curated view of leading packages by domain, and rseek.org is an R-tuned Google front end.

- the fpp3 suite of time series tools (successor to the forecast package)
- lubridate for simple date-time parsing
- sf for GIS
- rms, the regression companion to Hmisc
- gt for table formatting when I'm not using xtable
- RMariaDB for SQL
- nortest for Anderson-Darling and other normality tests
- here for path-reference sanity
- Rgraphviz to make simple graph objects
- Rcpp, reticulate, and RJulia for C++, Python, and Julia interop

My house rules: never use a data frame where an array will do; never use a tibble, tidy NSE, or purrr; and remember the magic of linear algebra.
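A minimal sketch of that last house rule, with made-up data and coefficients (nothing here comes from the original post): stay with a plain numeric matrix and let linear algebra do the work, rather than reaching for a data frame or tibble pipeline.

```r
## Simulated data on a hypothetical design matrix -- an array, not a data frame
set.seed(1)
n <- 100
X <- cbind(1, matrix(rnorm(n * 2), ncol = 2))   # design matrix with intercept column
beta <- c(2, -1, 0.5)                           # made-up "true" coefficients
y <- drop(X %*% beta + rnorm(n, sd = 0.1))      # response with a little noise

## Least squares via the normal equations, using crossprod(X) for t(X) %*% X
beta_hat <- drop(solve(crossprod(X), crossprod(X, y)))

## Agrees with lm()'s QR-based fit to numerical tolerance
fit <- lm(y ~ X - 1)
all.equal(beta_hat, unname(coef(fit)))
```

Note that lm() uses a QR decomposition, which is more numerically stable than the normal equations when X is ill-conditioned; the point of the sketch is only that base matrix operations get you there in two lines.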