I will beat loudly on the "Attention is a reinvention of Kernel Smoothing" drum until it is common knowledge. It looks like Cosma Shalizi's fantastic website is down for now, so here's an archive link to his essential reading on this topic [0].
If you're interested in machine learning at all and not very familiar with kernel methods, I highly recommend taking a deep dive. A huge amount of ML can be framed through the lens of kernel methods (and things like Gaussian Processes will become much easier to understand).
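To make the analogy concrete, here's a minimal NumPy sketch (all names mine): Nadaraya-Watson kernel smoothing and softmax attention both normalize a set of kernel weights and take a weighted average of the targets; only the kernel differs (Gaussian on distance vs. exponential on dot-product similarity).

```python
import numpy as np

def nadaraya_watson(x_query, x_train, y_train, bandwidth=1.0):
    """Kernel smoothing: Gaussian kernel on distance, normalized per query."""
    d2 = (x_query[:, None] - x_train[None, :]) ** 2   # pairwise squared distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))
    w /= w.sum(axis=1, keepdims=True)                 # weights sum to 1 per query
    return w @ y_train                                # weighted average of targets

def attention(Q, K, V):
    """Scaled dot-product attention: the same normalized-weighted-average shape,
    with an exponential kernel on query-key similarity instead of distance."""
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)                # softmax = normalized kernel weights
    return w @ V

# Toy usage: smooth noisy samples of sin(x) at two query points.
rng = np.random.default_rng(0)
x = np.linspace(0, 6, 50)
y = np.sin(x) + 0.1 * rng.standard_normal(50)
smoothed = nadaraya_watson(np.array([1.5, 3.0]), x, y, bandwidth=0.5)
```

Squint at the two functions and they are the same estimator with different similarity functions, which is the whole point of the drum-beating.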
I already see people saying GitLab is better: yes, it is, but it also sucks in different ways.
After years of dealing with this (first Jenkins, then GitLab, then GitHub), my takeaway is:
* Write as much CI logic as possible in your own code. It does not really matter what you use (shell scripts, make, just, doit, mage, whatever) as long as it is proper, maintainable code.
* Invest time in making your pipelines runnable on a developer machine as well (as much as possible, at least); otherwise, testing and debugging pipelines becomes a nightmare.
* Avoid YAML as much as possible, period.
* Don't bind yourself to some fancy new VC-financed thing that promises to solve CI once and for all but will need to be monetized eventually (see: Earthly, Dagger, etc.)
* Always use your own runners, on-premise if possible
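As a sketch of the first two points, here's a hypothetical `ci.py` (all step names made up): every pipeline step lives in ordinary code, so the CI YAML shrinks to one command per job and developers can run the exact same steps locally.

```python
#!/usr/bin/env python3
"""Hypothetical sketch of keeping CI logic in your own code."""
import subprocess
import sys

def run(cmd):
    """Echo and run a command, failing the step on a nonzero exit code."""
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

def lint():
    # Stand-in for a real linter invocation.
    run([sys.executable, "-c", "print('lint ok')"])

def test():
    # Stand-in for the real test runner.
    run([sys.executable, "-c", "print('tests ok')"])

def build():
    # Stand-in for the real build.
    run([sys.executable, "-c", "print('build ok')"])

STEPS = {"lint": lint, "test": test, "build": build}

def main(argv):
    """Run the named steps in order, or all of them when none are given."""
    for name in argv or list(STEPS):
        STEPS[name]()

if __name__ == "__main__":
    main(sys.argv[1:])
```

A developer runs `python ci.py lint test` at their desk; the GitLab/GitHub job then contains nothing but that same command, so there is almost nothing left in YAML to debug.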
Really resonated with this, reminded me of the journey I went on over the course of my dev career. By the end, my advice for every manager was roughly:
* Don't add process just for the sake of it. Only add it if seriously needed.
* Require ownership all the way to prod and beyond, no matter the role. (Turns out people tend to really like that.)
* Stop making reactive decisions. If something bad happened once, as an extremely unlikely fluke, don't act like it's going to happen again next week.
* Resist the urge to build walls between people/teams/departments. Instead, build a culture of collaboration (Hard and squishy and difficult to scale? Yup. Worth it? Absolutely.)
* Never forget your team is full of actual humans.
I started watching her videos out of curiosity, and then was really surprised to realize that dressing well is a skill like any other. And that it's not about spending a lot of money, but mostly about learning to pick colors and types of clothing that work well together and work for your specific skin tone and body type, and that are appropriate for the season and setting.
Some of the videos can be a bit clickbaity ("10 things you should NEVER wear!"), but she has a lot of videos that just cover the basics that are really genuinely valuable -- she clearly really cares about trying to help guys dress better. It's a lot of advice that I really wish I'd had someone teach me growing up.
1. For a Linux user, you can already build such a system yourself quite trivially by getting an FTP account, mounting it locally with curlftpfs, and then using SVN or CVS on the mounted filesystem. From Windows or Mac, this FTP account could be accessed through built-in software.
2. It doesn't actually replace a USB drive. Most people I know e-mail files to themselves or host them somewhere online to be able to perform presentations, but they still carry a USB drive in case there are connectivity problems. This does not solve the connectivity issue.
3. It does not seem very "viral" or income-generating. I know this is premature at this point, but without charging users for the service, is it reasonable to expect to make money off of this?
I've been using https://structurizr.com/ to automatically generate C4 diagrams from a model (rather than drawing them by hand). It works well with the approach to written documentation proposed in https://arc42.org/. It's very easy to embed a C4 diagram into a markdown document.
The result is a set of documents and diagrams under version control that can be rendered using the structurizr documentation server (for interactive diagrams and indexed search).
I also use https://d2lang.com/ for declarative diagrams in addition to C4, e.g., sequence diagrams and https://adr.github.io/ for architectural decision records. These are also well integrated into structurizr.
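For a flavor of the model-based approach, a minimal workspace in the Structurizr DSL might look like this (system and container names are made up; both the context and container diagrams are derived from the one model):

```
workspace {
    model {
        user = person "User"
        s = softwareSystem "Example System" {
            web = container "Web App"
            db = container "Database"
        }
        user -> web "Uses"
        web -> db "Reads/writes"
    }
    views {
        systemContext s {
            include *
            autolayout lr
        }
        container s {
            include *
            autolayout lr
        }
    }
}
```

Because the relationships live in the model rather than in the drawings, renaming a container updates every diagram that references it.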
Haugeland is GOFAI/cognitive science, not directly relevant to the modern machine-learning variety of models unless you are doing reinforcement learning or tree-search stuff (hey, poker/chess/Go bots are pretty cool!). Russell and Norvig is the typical introductory textbook for that. Marks and Haykin are both severely out of date (they have solid content, but they don't cover modern deep learning at scale, with its many emergent properties).
You are approaching this like an established natural-sciences field, where old classics = good. That is not true for ML; ML is developing and evolving quickly.
I suggest taking a look at Kevin Murphy's series for the foundational knowledge, and Sutton and Barto for reinforcement learning. MacKay's Information Theory, Inference, and Learning Algorithms is also excellent.
Kochenderfer's ML series is also excellent if you like control theory and cybernetics.
For applied deep learning texts beyond the basics, I recommend picking up some books/review papers on LLMs, Transformers, GANs. For classic NLP, Jurafsky is the go-to.
It’s something that’s pretty nice once you’ve implemented these lifecycles a bunch.
The tree turns out to be a great place to keep things, and you get several levels of control. Here’s how I think about it now.
`_init`: constructor. I usually use this to create dynamic child nodes
`_enter_tree`: now we have a parent; subscribe to signals, copy initial config, etc
`_ready`: I can be sure that all child nodes have called _ready, so anything I depend on there is… er, ready
`_exit_tree`: cleanup, especially anything we did with the parent and other ancestors. usually symmetric to `_enter_tree` if I’m using them
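Put together, a hypothetical node might pair the callbacks like this (Godot 4 syntax; everything beyond the lifecycle hooks, like the signal handler name, is made up):

```gdscript
extends Node

var label: Label

func _init():
    # No tree yet: a safe place to build dynamic children.
    label = Label.new()
    add_child(label)

func _enter_tree():
    # We have a parent now: subscribe to signals, read ancestor config.
    get_parent().child_entered_tree.connect(_on_sibling_entered)

func _ready():
    # Every child has already had _ready called, so label is usable.
    label.text = "ready"

func _exit_tree():
    # Symmetric to _enter_tree: undo what we did with the parent.
    get_parent().child_entered_tree.disconnect(_on_sibling_entered)

func _on_sibling_entered(_node):
    pass
```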
I just did a find-all in my biggest recent project, and I never actually use `queue_free` or `free` for explicit memory management. Most of the time when I've thought I needed to, I was actually creating a bug.
0. https://web.archive.org/web/20250820184917/http://bactra.org...