dooferlad's comments

Grid stress in the first world isn't a big problem, since electric vehicles can charge when the grid is under less pressure. The demand curve over a day has a reasonably predictable shape, and EV charging can often be done when wholesale prices are low. Of course we will need to increase capacity, but that can be done over time.
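
To make that concrete, here is a minimal sketch (with made-up hourly prices) of the scheduling idea: a smart charger just needs to find the cheapest contiguous run of hours, which almost always lands overnight when demand is low.

```python
# Sketch: pick the cheapest contiguous window to charge an EV, given
# hypothetical hourly wholesale prices (p/kWh). Numbers are illustrative.

def cheapest_window(prices, hours_needed):
    """Return (start_hour, total_cost) of the cheapest contiguous run."""
    best_start, best_cost = 0, float("inf")
    for start in range(len(prices) - hours_needed + 1):
        cost = sum(prices[start:start + hours_needed])
        if cost < best_cost:
            best_start, best_cost = start, cost
    return best_start, best_cost

# Made-up prices for a 24-hour day; overnight hours are cheapest.
hourly = [12, 10, 8, 7, 7, 9, 14, 20, 22, 18, 16, 15,
          15, 16, 17, 19, 24, 28, 26, 22, 18, 15, 13, 12]
cheapest_window(hourly, 4)   # -> (2, 31): start at hour 2, total cost 31
```

A real charger would also account for tariff structure and departure time, but the shape of the problem is the same.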

Fossil power used to generate electricity that then powers your car is less carbon intensive than using an internal combustion engine directly. The overhead of getting oil out of the ground, refining it and transporting it is significant, and you then lose much of the remaining energy as heat when you run your engine.


Whatever Netflix wants to make popular tramples over what I am already watching. I find the UI gets in the way of enjoying several programs I am part way through by pushing something new instead. If their goal is to make series and only get me to watch half of the episodes, then the algorithm is spot on.

That said, Amazon Prime Video is so much worse. It suggests I would like to watch season 1 of a show when I have already watched it and am part way through season 2.


In their defense, their catalog in some countries (like mine) is so small that I was able to just ... browse through the whole offer in 10 minutes.

I've managed to binge-watch everything I liked in two months (while working full time remotely) and cancel Prime.

With such a small catalog suggestions will mostly be wrong, as there isn't enough content to fill the whole suggestions tab :)


Which makes you wonder why they even bother with suggestions in the first place, instead of doing the obvious thing: giving you a searchable list of all movies they have for your region.

Given that the former is a major engineering project, while the latter is a junior-level interview question, one has to assume they're trying to confuse their users on purpose.
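
The "junior-level" version really is this small. A sketch, with invented titles standing in for a regional catalog:

```python
# Sketch of the obvious feature: a plain, case-insensitive substring
# search over the full regional catalog. Titles are made up.

CATALOG = ["The Expanse", "The Boys", "Fleabag", "Clarkson's Farm"]

def search(query, titles=CATALOG):
    """Return every title in the region matching the query."""
    q = query.lower()
    return [t for t in titles if q in t.lower()]

search("the")   # -> ['The Expanse', 'The Boys']
```

Everything beyond this (ranking, fuzziness, pagination) is polish; the point is that an exhaustive, searchable list is trivial compared to a recommender.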


Nah, it's presumably because they just lift and shift the US code to whatever new region they're in, without actually investing any thought into it.

This is a recurring problem with US-based services, as residents of the USA tend to forget that theirs is a very large country and that other markets look nothing like it.


IIRC Amazon Video runs its own recommendation system, distinct from the rest of the company; previously recommendations were generated by the standard retail systems. I'm convinced they got worse when this change was made, but that's purely anecdotal.


My solution was to attach a little board from https://www.particle.io/ to a doorbell ringer to detect when the doorbell rings. The board then pokes a port on a PC in my office that is always on anyway, and the PC uses ffmpeg to play a sound.
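
The PC side can be very small. A sketch under assumed details (the port number, sound file and use of ffplay are my inventions, not from the original setup): listen on a TCP port and treat any incoming connection as "the doorbell rang".

```python
# Sketch: listen on a TCP port; when the microcontroller connects,
# play a chime. Port and filename are invented for illustration.
import socket
import subprocess

def play_chime(path="doorbell.wav"):
    # ffplay ships with ffmpeg; -nodisp and -autoexit keep it headless.
    subprocess.run(["ffplay", "-nodisp", "-autoexit", path])

def wait_for_rings(port, on_ring=play_chime, max_rings=None):
    """Accept connections (forever, or max_rings times), firing on_ring each time."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("0.0.0.0", port))
        srv.listen()
        rings = 0
        while max_rings is None or rings < max_rings:
            conn, _ = srv.accept()
            conn.close()          # the connection itself is the signal
            on_ring()
            rings += 1
```

On the board side, one TCP connect per button press is all that's needed.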


Oh, I think it is quite a reasonable comment. The work being done in the benchmark isn't interesting, and the benchmark itself is short.

If you want to get ultimate performance from Python then write a C function...
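
A minimal illustration of the spirit of that advice, without actually writing C: move the hot loop out of the interpreter. Python's built-in sum() runs its loop in C, so comparing it against a hand-written Python loop shows the gap the benchmark is really measuring.

```python
# The hot loop in Python vs. the same loop in C (via the built-in sum).
import timeit

def py_sum(xs):
    total = 0
    for x in xs:        # every iteration pays interpreter overhead
        total += x
    return total

data = list(range(100_000))
assert py_sum(data) == sum(data)

slow = timeit.timeit(lambda: py_sum(data), number=20)
fast = timeit.timeit(lambda: sum(data), number=20)
# sum() is typically several times faster; the exact ratio is machine-dependent.
```

For a real hot path you would reach for a C extension, ctypes/cffi, or a library like NumPy that keeps the loop in compiled code.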

If you want to inform me about runtime performance then show me how the language runtimes are spending cycles. If you want to convince me that a language is great then tell me about the engineering effort needed to create something and then run it in production.


It's a two-sentence post, the second of which seems to be trying to dispel the idea that it's about what you're implying. It may be uninteresting to experts in the area who consider this common knowledge, but that's not a failure of the author to do something worthwhile.


(Microelectronics graduate in 2001, turned software engineer; I have worked for ARM and Imagination Technologies on simulations of CPUs, GPUs, networking products and other bits of SoCs.)

Yield wouldn't be that bad - ARM CPUs are small in comparison to their x86 cousins. I would expect that you would get to 32 CPUs the same way everyone else does: build a 32-core design, blow enable fuses on the bad CPUs, and badge the part as appropriate.
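
The binning logic behind "blow fuses and badge as appropriate" is simple to sketch. Bin names and thresholds here are invented for illustration:

```python
# Sketch: test each core on a 32-core die, fuse off the failures,
# and badge the part by how many good cores survive. Bins are made up.

BINS = {32: "32-core flagship", 24: "24-core", 16: "16-core"}

def bin_die(core_ok):
    """core_ok: per-core pass/fail results from wafer test."""
    good = sum(core_ok)
    # Badge at the largest bin we can fill; surplus good cores are fused off too.
    for count in sorted(BINS, reverse=True):
        if good >= count:
            return BINS[count]
    return "scrap"

die = [True] * 30 + [False] * 2   # 30 of 32 cores pass
bin_die(die)                       # -> '24-core'
```

This is why lower-core-count SKUs are often the same silicon as the flagship: the "smaller" part is a bigger die with some cores permanently disabled.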

To get a desktop-class GPU they will need to address memory bandwidth. I am assuming they are using a tile-based rendering process, so memory bandwidth is less of an issue than with scan-line rendering (do desktop GPUs still do that?). I would assume they are enjoying the benefits of the system-level cache, with the CPU and GPU sharing it. I would expect there is some careful benchmarking of increasing GPU cores, memory speed, memory size and system cache size going on at Apple.

There isn't anything stopping Apple from supporting external GPUs, but it would require a new SoC with more PCIe lanes. External buses are much more power hungry and take up space on the die. I don't have a mental model of how much space plonking a PCIe x16 interconnect would cost in terms of area or power (taking into account you need to get that data across the chip too), but my gut reaction is that it isn't likely unless there is a customer use case they can't solve on chip.


The Infinite Game is also great. The focus is on building companies that last hundreds of years, so you need leadership, culture and investors that are all about long-term stability over keeping short-term investors happy. There are some rants, but nothing I disagree with!


Indeed. The writer doesn't even challenge some of the more obvious problems with the comments they have collected, e.g. self-driving cars causing traffic jams as they look for parking. Why would they look? Wouldn't they request a space, reserve one based on location, price and charging needs, then drive to it on a route less used by time-sensitive loads (food, people)?
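
The "ask for a space" alternative is just a scoring problem. A sketch, with fields and weights entirely invented for illustration:

```python
# Sketch: score each advertised parking space by price, detour and
# charging availability, then reserve the best one. All values made up.

def score(space, needs_charging, price_weight=1.0, distance_weight=0.5):
    s = price_weight * space["price"] + distance_weight * space["detour_km"]
    if needs_charging and not space["charger"]:
        s += 100.0   # effectively rule out spaces that can't charge the car
    return s

def pick_space(spaces, needs_charging):
    return min(spaces, key=lambda s: score(s, needs_charging))

spaces = [
    {"id": "A", "price": 4.0, "detour_km": 0.2, "charger": False},
    {"id": "B", "price": 6.0, "detour_km": 1.0, "charger": True},
]
pick_space(spaces, needs_charging=True)["id"]    # -> 'B'
pick_space(spaces, needs_charging=False)["id"]   # -> 'A'
```

No circling required: the car commits to a destination before it arrives.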


The rest of the SoC plays a huge part, just like the architecture of a PC does. My personal mantra when thinking about how to get the most out of a CPU is "feed the beast", since idle time is the killer. In the simplest terms, your L1 caches need to be big enough to hold the program you are running and the data it is using. You can hint to the CPU about what data you will need so it preloads it, and branch prediction does a similar job for fetching instructions and avoiding branch misses.

I for one like to see big caches in SoCs, but the cost of putting them there needs to be balanced against the performance requirements of the product (no point having a bigger chip than you need). There is also the numbers game of 64-bit / number of cores / RAM etc., which are easy to parse but difficult to understand. Vanishingly few consumers care about IPC or the time taken to wake a core or migrate a workload between cores, so great innovations like big.LITTLE are used as marketing numbers rather than to tune an SoC to its best performance. I would like to see one big core, two little cores and more cache myself.

So what do Apple have? Lots of cores? No. If you build an 8-core SoC and can't keep those cores fed then you have listened to marketing.


SyncThing mostly works. Dropbox simply doesn't support soft links. I have tried so many of these tools, and SyncThing comes closest to stopping me from reaching for rsync. SyncThing between Windows and Linux was painful the last time I tried it, though, with all sorts of, you guessed it, soft link issues.

The official line from Dropbox is not to use soft links.


A NOP shouldn't ever take up actual pipeline space in a modern CPU - it can be discarded at decode.

A clock is a signal used as the valid signal for data moving across a bus, and that bus could be as short as the link between pipeline stages.

With async you have a different valid signal, which you will need to derive.

With aggressive dynamic frequency and voltage scaling, clock gating, power gating, and different power and clock islands, you get a design that is very difficult to improve on. What AMD has done recently, with circuits that lower voltage based on reading the on-chip environment rather than sticking to a fixed frequency-to-voltage mapping, is a nice optimisation.

What async designs don't improve on are all the static power issues that are increasingly important.

It all adds up to being an interesting take on the problem of digital design, but not much more.

