In my mind, all these productivity systems boil down to creating a list of tasks. The list can be between 4 and 10 items. You do one task at a time, and when you finish you cross it off. That is the extent of focus my brain is capable of. Creating giant backlogs and plans is just | /dev/null in my case.
A giant backlog of tasks can help with strategizing a long-lived project, but I classify that as planning material; when it comes time to do the work, I just create a new short list based on it.
Because of inline CSS, different fonts on different OSes, and generally no guarantees about render size. In short, you have no idea how big a block is without rendering it.
That would be a good point, as the C in CSS should stand for Complicated, but the recommended setting is "auto" (changed from the default of "visible"). Setting it to auto turns on layout containment, style containment, and paint containment for the element. It allows the browser to skip rendering while still keeping the element available to, e.g., tab order. It's like a hybrid of visible and hidden.
I think the answer to the GP's question is that the browser does know how to compute this before starting the render pass, but the additions to the spec that allow placing fences around layout- and style-related containment were too new to be on by default in all browsers. So the default is to assume all rendering is done, and the HTML/CSS author can opt in to turning this on (which in the early days may have carried the risk of implementation bugs or inconsistencies across browsers).
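For concreteness, the property being discussed appears to be `content-visibility` (my inference from the "visible" default and the containment behavior described above); the opt-in would look something like this sketch:

```css
/* Assumed example: content-visibility: auto lets the browser skip
   rendering off-screen sections while keeping them in the
   accessibility tree and tab order. */
.long-section {
  content-visibility: auto;
  /* contain-intrinsic-size gives the browser a placeholder size so
     scrollbars don't jump before the section is actually rendered. */
  contain-intrinsic-size: auto 500px;
}
```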
I remember the sysadmins trying to figure out why Squid was so slow and tracing its system calls. It turned out Squid did not use kqueue/epoll. When they asked the maintainer to improve this, he said Squid was fast enough. This was the start of Varnish, where one of the main ideas was to use modern OS calls to speed things up.
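A minimal sketch (not Varnish's or Squid's actual code) of the difference being described: with select()-style polling, the process re-scans every socket on each call, while epoll/kqueue (wrapped here by Python's `selectors` module) register sockets once and the kernel reports only the ones that are actually ready.

```python
import selectors
import socket

sel = selectors.DefaultSelector()  # epoll on Linux, kqueue on BSD/macOS

server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ)  # register once, up front

# Connect a client so the listening socket becomes ready, then run one
# iteration of an event loop: block until the kernel reports readiness
# instead of re-scanning all sockets ourselves.
client = socket.create_connection(server.getsockname())
events = sel.select(timeout=1)
for key, _mask in events:
    conn, addr = key.fileobj.accept()
    conn.close()
client.close()
sel.close()
server.close()
```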
I don't understand. You complain about coworkers getting high pay and complaining while not producing any value. Do you think a downturn and potentially getting fired will educate them in these matters, and that getting less pay will make them solve actual problems?
Yes, this is true as long as the method or function doesn't contain ifs that change the behaviour depending on the data. In other words, if the problem is so well defined that you can create a method that solves it without having to account for x variations of the problem, then it is fine. This is the copy-paste versus creating-a-function discussion. The problem is that when there are x variations and the code needs to do different things depending on the variation, we usually break the modularity of the function instead of separating the generic and non-generic parts. Hence the ifs in the function. From what I have seen, people are unable to do this properly in their own codebases, so I don't think it will happen globally. On the other hand, libraries are kind of the answer to the problem, and as problems get well defined, one starts using libraries. Raising the level of abstraction is a continuous process.
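A toy sketch of what separating the generic and non-generic parts could look like (the names and the report format are made up for illustration): instead of branching on the variation inside one function, the generic framing stays fixed and the variant-specific part is passed in.

```python
# The if-laden version that breaks modularity: the generic framing is
# tangled up with the per-variant branches.
def render_report_ifs(data, variant):
    if variant == "csv":
        body = ",".join(map(str, data))
    elif variant == "tsv":
        body = "\t".join(map(str, data))
    else:
        raise ValueError(variant)
    return f"REPORT\n{body}\n"

# Separated version: the frame is generic, and the formatter (the
# variation) is injected as a function.
def render_report(data, formatter):
    return f"REPORT\n{formatter(data)}\n"

csv = lambda data: ",".join(map(str, data))
tsv = lambda data: "\t".join(map(str, data))

# Both versions agree, but new variants no longer require new ifs.
assert render_report([1, 2, 3], csv) == render_report_ifs([1, 2, 3], "csv")
```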
At the end of the article: "This is a clearly ambiguous merge and it simply wouldn’t be right for your version control system to try 'resolving' it for you."
So the strategy is that if there is any doubt you have to manually fix conflicts. This is by design.
If you are used to being the person everybody goes to, it is a big change, since nobody will depend on you when you are new. But that is fine; it takes some time to get recognised in a new work environment. If you are skilled, it will change in 2-3 months.
By making fixed-function hardware, which may or may not run on hard-coded instructions per se. Think about a calculator to start with. There are lots of ways to make a computer that are not a Von Neumann architecture, which is what you’re thinking of. There are a bunch of examples of non-programmable computers here: https://en.wikipedia.org/wiki/Computer#History
> Computers are programmable. A system created from logic gates, executing a fixed set of operations is not called a computer. Your washing machine is not a computer either.
This definition excludes what was once the known totality of computers, namely mechanical and electronic targeting and firing computers and all analog computers.
Digital general-purpose computers are programmable by means of a set of discrete operations. However, this doesn't generalize to all sorts of computers.
Where’s this quote from? Who said it? Using a washing machine as your counter example feels like a straw man. Is a calculator not a computer?
* Edit, just to add: your quote is definitely wrong. A CPU is a set of logic gates that executes a fixed set of operations.
People have used the word “computer” for a lot of non-programmable machines over the years, and it’s clear in the Wikipedia article I linked to that programmability is a recent feature of computers, and was not always there. (Nor is it always there to this day.)
While I’m sure you can find examples of people who agree with you, that doesn’t invalidate history. A computer is anything that computes something; that’s how the word has been defined and used up to and including today. This includes the Antikythera mechanism (an analog computer) and even, loosely, the abacus (a digital “computer”).
We're in a very weird transitional phase for a lot of this kind of thing. It's still largely a fixed set of operations, but it can be re-programmed and in theory isn't completely restricted to just those functions. Someone could port Doom to it (I do actually want to see that one).
In the 1940s, there were computers, such as ENIAC, that were programmed with plugboards that changed the connections between modules that performed various mathematical operations.
If the modules in the Nimatron could be reconfigured to perform other operations, the Nimatron would, by the standards of the era, be considered an electronic computer, even if today we usually reserve that term for Von Neumann style machines where the program consists of symbols stored directly in the machine's memory.
The Online Etymology Dictionary states that use of the term to mean "calculating machine (of any type)" dates from 1897.
The Online Etymology Dictionary indicates that the "modern use" of the term, to mean "programmable digital electronic computer," dates from 1945 under this name.
“Computer” remains today a word that means any computational device. Without any context, it’s safe to assume programmability, but that’s just a reasonable assumption and not a definition of the word computer. People are making fixed-function computers today, and @orbital-decay and I already gave examples of them here. I happen to work on fixed-function non-programmable hardware that is part of a widely used commodity processor today, a sub-core that does arbitrary amounts of computation without being instruction driven and can’t be used for general purpose computation.
I believe they mean that it was not RE-programmable, it was designed to do exactly one thing, and would require hardware changes to execute a different program.
We have hard-wired, non-programmable computers in production today. GPUs have sub-components that do fixed-function computation and don’t run on instructions. I suspect this analog AI processor is not programmable: https://www.mythic-ai.com/product/m1076-analog-matrix-proces.... It seems likely to me that we’re about to see a lot of growth in specialized non-programmable hardware now that chips have started hitting size and process limitations. Turning common workflows into fixed-function hardware is one of the lowest-hanging fruits we have for increasing compute efficiency.
Over the years we have evolved the meaning of "computer" from something that calculates or reckons into an electronic device that runs software. In reality, any black box with a deterministic mapping from inputs to outputs can be considered a computer.
It might be easier to think of mechanical computers, such as the WW2 fire-control computers aboard Navy ships[0] or, more famously, the Antikythera mechanism[1]. These are fixed devices; they compute values from inputs. The "program" is stored in the gears, camshafts, and differentials, and the ratios between them.
Similarly, fixed digital computers such as the Nimatron have their operations stored in relays and digital logic. These sorts of computers don't have a list of instructions. They just have schematics, inputs going through electronic circuits that wind up at outputs. You can do a lot of calculating with just simple logic gates.
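As a toy illustration of that last point, here is a 1-bit full adder built only from AND/OR/XOR gates and chained into a 4-bit adder; the "program" is nothing but the wiring. (This is my sketch of the general idea, not the Nimatron's actual logic.)

```python
def full_adder(a, b, carry_in):
    # Two half-adders plus an OR gate, exactly as in a schematic:
    # XOR for the sum bits, AND/OR for the carry.
    s1 = a ^ b
    total = s1 ^ carry_in
    carry_out = (a & b) | (s1 & carry_in)
    return total, carry_out

def add_4bit(x, y):
    # Chain four fixed adders; the circuit has no instruction list,
    # just inputs flowing through gates to outputs.
    carry, result = 0, 0
    for i in range(4):
        bit, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= bit << i
    return result, carry

print(add_4bit(0b0101, 0b0011))  # 5 + 3 -> (8, 0)
```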
>What has the world come to, where technology has been appropriated and we are left paying rents every month and companies are increasingly becoming user-hostile and predatory and monopolistic!
This is the issue. While the subscription model may be the right path forward, it may also be the catalyst of a collapse (like the dotcom bust) if users get bullied enough.
Beware of imposter syndrome; it never goes away. If your old code looks awful and embarrassing, you are on the right track (you are learning). Focus on learning to read code; it is more important than writing it. Find a good open-source project and read some good code.