The GPL's conditions are triggered only by distribution. If you distribute modified code, or offer it as a networked service, you must make the source available under the same terms.
Offering as a networked service is not distribution. That was why they had to make AGPL to put conditions on use in networked services.
1. Emissions matter, not the particular fuel source they come from. Most places cannot meet 100% of their needs, or even 100% of their growth in needs, with renewables so they must use and even grow some fossil fuel sources.
India has vast coal reserves and is the second largest coal producer in the world, whereas it is not a major oil producer. Hence it uses coal. The same holds for China.
If the story was about some country shutting down their last natural gas plant instead of their last coal plant, no doubt someone would be pointing out that meanwhile the US is increasing natural gas production at a record pace.
In 2025 the US added 7 GW of natural gas electricity capacity, and India added 7 GW of coal. Natural gas generates about half the CO2 of coal, but India has over 4x the population, so the US added about twice as much new emissions per capita.
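Assuming round population figures (~340M for the US, ~1.4B for India; my numbers, not from the thread) and treating gas as half of coal's CO2 per GW, the arithmetic works out like this:

```python
# Back-of-the-envelope: new emissions capacity per capita, US vs India.
# Units are "coal-equivalent GW": coal = 1.0, natural gas = 0.5.
US_GAS_GW = 7        # new natural gas capacity added (GW)
INDIA_COAL_GW = 7    # new coal capacity added (GW)
US_POP = 340e6       # rough US population (assumed)
INDIA_POP = 1.4e9    # rough India population (assumed), just over 4x the US

us_per_capita = (US_GAS_GW * 0.5) / US_POP
india_per_capita = (INDIA_COAL_GW * 1.0) / INDIA_POP

ratio = us_per_capita / india_per_capita
print(round(ratio, 1))  # about 2: the US added roughly twice the new emissions per capita
```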
But we also need to consider how much renewables were added. That will be part of point #3.
2. India's emissions are about 2 tons per year per capita: under half the global average, about a third of the EU average, a fifth of China's, and a seventh of the USA's. Even if it takes them longer to get off fossil fuels than the other large countries, they are likely never to come near the per-capita emissions levels of those other countries.
3. They are actually making better progress at this than most others. 50% of electricity used in India is renewable, compared to 25% in the US, 40% in China, and 47% in the EU.
They are not just adding coal. They are adding wind and solar at record paces too. In 2025 they added around 7 GW of coal capacity, 38 GW of solar, and 6 GW of wind.
The US is doing the same, but with natural gas rather than coal: 7 GW of natural gas, 25 GW of solar, 13 GW of wind. That is about the same share of renewables in new capacity (~85%) as India.
4. Yes, per capita is the correct measure, because the atmosphere is very efficient at distributing CO2 emitted anywhere to everywhere. A ton of CO2 has the same impact no matter where it is emitted. Unless you can make a good argument that some people have some sort of natural or divine right to a bigger share of whatever CO2 budget we decide Earth can afford, it has to be per capita.
> It turns out a lot of things we are told we need, we really don't. People lived without them as recently as a few years ago.
It also often turns out that when some new way comes along to do something that people like to do, the ways they used to do those things go away. If you don't like the new way you can't go back to how it used to be done.
The last physical media video rental store within a reasonable drive of me closed around 8 years ago. Redbox went away in 2024. There is still rental by mail, but that is slow.
Those who liked being able to rent a movie without planning days ahead are stuck with streaming now.
Another example is cell phones. It used to be that there were pay phones all over the place. Nearly every public place had a payphone nearby. In most cities there was a good chance there was a street payphone on every block, and nearly every restaurant and gas station had one. On freeways there were call boxes to summon help.
Pay phones peaked in the US in 1995. When cell phones went mainstream in the early to mid 2000s, pay phones rapidly went away, and in about 10 years were almost all gone. Around 90% of freeway call boxes also disappeared. They now are mostly only in areas with poor cellular coverage.
If you want to be able to make calls while out and about now, doing it the way it was done before cell phones quite likely is not feasible.
> Those who liked being able to rent a movie without planning days ahead are stuck with streaming now.
Just want to point out that public libraries often have great DVD collections (also music, games, and more) and are often underutilized. Definitely still a viable way to watch a movie for many folks.
Perhaps this makes a very big difference to you, but I often have to remind myself that iTunes movie rentals are very much alive and function just as they do some ten years ago. No subscription required. Not physical, sure, but a normal rental experience.
it's interesting that if you want to watch a movie, torrenting is pretty much the same it was 20 years ago. at this point I torrent movies that are on Netflix (that I have a subscription for) simply because it gets me a better bitrate much more reliably.
While not exactly the same as freeway call boxes... pretty much every state requires any business listed on the food/gas/hotel/recreation signs at off-ramps to have a free phone for public use.
The US is a net exporter of petroleum (crude oil plus refined products) but from what Google tells me it is still a net importer of crude oil. It also tells me 75% of what goes through Hormuz is crude.
Also, domestic crude is mostly light, sweet crude, whereas many US refineries are designed to deal with heavy, sour crude. Google is telling me 80% of the crude that goes through Hormuz is heavy, sour crude.
Does any of this raise the impact that disruptions of Hormuz would have on the US?
>Also, domestic crude is mostly light, sweet crude, whereas many US refineries are designed to deal with heavy, sour crude. Google is telling me 80% of the crude that goes through Hormuz is heavy, sour crude.
The US has some of the best chemists in the world; light sweet crude is easy to refine but heavy sour crude is hard, so US refineries refining light sweet would be a waste of their talents - better to export it out for newbies to refine and buy the harder-to-refine and therefore cheaper heavy sour crude. But if heavy sour becomes more expensive, then the US will switch to the easymode option in a heartbeat.
An increased cost of inputs will always hurt the entire industry, but it won't particularly hurt the US any more than anyone else, and will probably hurt them the least - especially when they have plenty of domestic shale oil that will be financially viable to extract if prices go up.
I've not done serious networking stuff for over two decades, and never in as complex an environment as that in the article, so the networking part of the article went pretty much over my head.
What I want to do when running a Docker container on Mac is to be able to have the container have an IP address separate from the Mac's IP address that applications on the Mac see. No port mapping: if the container has a web server on port 80 I want to access it at container_ip:80, not 127.0.0.1:2000 or something that gets mapped to container port 80.
On Linux I'd just use Docker bridge networking and I believe that would work, but on a Mac that just bridges to the Linux VM running under the hypervisor rather than to the Mac itself.
Is there some officially recommended and supported way to do this?
For a while I did it by running WireGuard on the Linux VM to tunnel between that and the Mac, with forwarding enabled on the Linux VM [1]. That worked great for quite a while, but then stopped and I could not figure out why. Then it worked again. Then it stopped.
I then switched to this [2] which also uses WireGuard but in a much more automated fashion. It worked for quite a while, but also then had some problems with Docker updates sometimes breaking it.
It would be great if Docker on Mac came with something like this built in.
(co-author of the article and Docker engineer here) I think WireGuard is a good foundation to build this kind of feature. Perhaps try the Tailscale extension for Docker Desktop which should take care of all the setup for you, see https://hub.docker.com/extensions/tailscale/docker-extension
BTW are you trying to avoid port mapping because ports are dynamic and not known in advance? If so you could try running the container with --net=host and in Docker Desktop Settings navigate to Resources / Network and Enable Host Networking. This will automatically set up tunnels when applications listen on a port in the container.
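For what it's worth, the suggestion above looks roughly like this in practice. This is an untested sketch: `nginx:alpine` is just a stand-in image, and it assumes Host Networking has been enabled in Docker Desktop's settings as described:

```shell
# Run a container directly in the host's network namespace; no -p/--publish
# port mapping is needed because listeners bind to the host directly.
docker run --rm --net=host nginx:alpine

# From the Mac, the container's web server is then reachable on localhost
# at the port the container itself listens on:
curl http://localhost:80/
```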
I'm basically using Docker on Mac as an alternative to VMware Fusion, with a much faster startup time and more flexible directory sharing.
I want to avoid port mapping because I already have things on the Mac using the ports that my things in the container are using.
I have a test environment that can run in a VM, container, or an actual machine like an RPi. It has copies of most of our live systems, with customer data removed. It is designed so that as much as possible things inside it run with the exact same configuration they do live. The web sites in then are on ports 80 and 443, MySQL/MariaDB is on 3306, and so on. Similarly, when I'm working on something that needs to access those services from outside the test system I want to as much as possible use the same configuration they will use when live, so they want to connect to those same port numbers.
Thus I need the test environment to have its own IP that the Mac can reach.
Or maybe not... I just remembered something from long ago. I wanted a simpler way to access things inside the firewall at work than using whatever crappy VPN we had, so I made a poor man's VPN with ssh. If I needed to access, say, ports 80 and 3306 on host foo at work, I'd ssh to somewhere inside the firewall I could reach, setting that up to forward local 10080 and 13306 to foo:80 and foo:3306. I'd add an /etc/hosts entry for foo giving it some unused address like 10.10.10.1. Then I'd use ipfw to set things up so that any attempt to connect to 10.10.10.1:80 or 10.10.10.1:3306 would get forwarded to 127.0.0.1:10080 or 127.0.0.1:13306, respectively. That worked great until Apple replaced ipfw with something else. By then we had a decent VPN for work, so I no longer needed my poor man's VPN and didn't look into how to do this in whatever replaced ipfw.
Learning how to do that in whatever Apple now uses might be a nice approach. I'll have to look into that.
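For reference, a rough translation of that ipfw trick to pf, which is what replaced ipfw on modern macOS. This is an untested sketch using the example names from above; `user@gateway.example.com` and the pf.conf path are placeholders:

```shell
# 1. Tunnel local ports to foo through an ssh-reachable gateway inside
#    the firewall:
ssh -N -L 10080:foo:80 -L 13306:foo:3306 user@gateway.example.com &

# 2. Put the fake address on loopback and map a name to it in /etc/hosts:
sudo ifconfig lo0 alias 10.10.10.1/32
echo "10.10.10.1 foo" | sudo tee -a /etc/hosts

# 3. Redirect the fake IP:port pairs to the forwarded local ports.
#    pf rdr rules (in, e.g., /etc/pf-poor-mans-vpn.conf):
#
#      rdr pass on lo0 inet proto tcp to 10.10.10.1 port 80   -> 127.0.0.1 port 10080
#      rdr pass on lo0 inet proto tcp to 10.10.10.1 port 3306 -> 127.0.0.1 port 13306
#
#    Load the rules and enable pf:
sudo pfctl -ef /etc/pf-poor-mans-vpn.conf
```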
Hey, I'm the maintainer of docker-mac-net-connect. Just an update that the issues caused by the latest Docker Desktop changes have been fixed in the latest version. The battle with Docker Desktop is a bit frustrating and likely ongoing, but fwiw I've made some improvements that catch these breakages much earlier now via dependabot and integration tests.
As another commenter mentioned, Colima is a good alternative to Docker Desktop if you're looking. It doesn't expose container IPs either, but docker-mac-net-connect now supports Colima out of the box.
I don't have a Mac environment, but I have researched this a bit for devex purposes, and I would go with the Colima project as an open-source solution for containers on Mac. Have you tried it?
Note though that the court can award more than this in some circumstances. From the Copyright, Designs and Patents Act 1988, section 97 [1]:
(2) The court may in an action for infringement of copyright having regard to all the circumstances, and in particular to—
(a) the flagrancy of the infringement, and
(b) any benefit accruing to the defendant by reason of the infringement,
award such additional damages as the justice of the case may require.
I think most copyright systems have some provision for damages beyond lost profits, because if they did not, what incentive would there be not to infringe?
You are off a bit on the numbers. First, though, the RIAA suits were not for downloading. The suits were for distribution.
Here is how their enforcement actions generally went.
1. They would initially send a letter asking for around $3 per song that was being shared, threatening to sue if not paid. This typically came to a total in the $2-3k range. There were a few where the initial request was for much more such as when the person was accused of an unusually high volume of intentional distribution. But for the vast majority of people who were running file sharing apps in order to get more music for themselves rather than because they wanted to distribute music it averaged in that $2-3k range.
2. If they could not come to an agreement and actually filed a lawsuit they would pick maybe 10-25 songs out of the list of songs the person was sharing (typically around a thousand) to actually sue over. The range of possible damages in such a suit is $750-30000 per work infringed, with the court (judge and jury) picking the amount [1].
NOTE: it is per "work infringed", not per infringement. The number of infringements will be one of the factors the court will consider when deciding where in that $750-30000 range to go.
3. There would be more settlement offers before the lawsuit actually went to trial. These would almost always be in the $200-300 per song range, which since the lawsuit was only over maybe a dozen or two of the thousand+ songs the person had been sharing usually came out to the same ballpark as the settlement offers before the suit was filed.
Almost everyone settled at that point, because they realized that (1) they had no realistic chance of winning, (2) they had no realistic chance of proving they were an "innocent infringer", (3) the minimum statutory damages of $750/song x 10-15 songs was more than the settlement offer, and (4) on top of that they would have their own attorney fees, and in copyright suits the loser often has to pay the winner's attorney fees as well.
4. Less than a dozen cases actually reached trial, and most of those settled during the trial for the same reasons in the above paragraph that most people settled before trial. Those were in the $3-15k range with most being around $5k.
[1] If the defendant can prove they were an "innocent infringer", meaning they didn't know they were infringing and had no reason to know, then the low end is lowered to $200. If the plaintiff can prove that the infringement was "willful", meaning the defendant knew it was infringement and did it deliberately, the high end is raised to $150k.
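The settlement incentive in point (3) is easy to see numerically; a quick sketch using the figures from this thread:

```python
# Why nearly everyone settled: compare the typical settlement offer against
# the statutory-damages floor for a typical RIAA suit (figures from the thread).
PER_SONG_FLOOR = 750          # statutory minimum per work infringed
INNOCENT_FLOOR = 200          # floor if "innocent infringer" is proven
SONGS_SUED_OVER = 10          # low end of songs typically picked out of ~1,000 shared
SETTLEMENT_HIGH = 3000        # top of the typical $2-3k settlement range

minimum_award = PER_SONG_FLOOR * SONGS_SUED_OVER      # 7,500
innocent_minimum = INNOCENT_FLOOR * SONGS_SUED_OVER   # 2,000

# Even the statutory minimum beats the settlement offer, before anyone's
# attorney fees are counted:
assert minimum_award > SETTLEMENT_HIGH
print(minimum_award, innocent_minimum)
```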
What I should have said is that all their lawsuits included an allegation of infringing the distribution right. There weren't any as far as I know that were just downloading.
I think you are correct for the overwhelming majority of cases in the US; the unauthorized reproduction/distribution is where things get very aggressive and easy to prosecute based on existing case law.
The only case that comes to mind of threats just for downloading blew up in the law firm's face... among other shenanigans, it came out that their own machines were seeding files as an attempted honeypot.
However other countries may have different laws as far as possession vs distribution and related penalties.
> NOTE: it is per "work infringed", not per infringement. The number of infringements will be one of the factors the court will consider when deciding where in that $750-30000 range to go.
But that's the whole problem, isn't it? Consider how a P2P network operates. There are N users with a copy of the song. From this we know that there have been at most N uploads, for N users, so the average user has uploaded 1 copy. Really slightly less than 1, since at least one of them had the original so there are N-1 uploads and N users and the average is (N-1)/N.
There could be some users who upload more copies than others, but that only makes it worse. If one user in three uploads three copies and the others upload none, the average is still one but now the median is zero -- pick a user at random and they more likely than not haven't actually distributed it at all.
Meanwhile the low end of the statutory damages amount is 750X the average, which is why the outcome feels absurd -- because it is.
Consider what happens if 750 users each upload one copy of a $1 song. The total actual damages are then $750, but the law would allow them to recover a minimum of $750 from each of them, i.e. the total actual damages across all users from each user. The law sometimes does things like that where you can go after any of the parties who participated in something and try to extract the entire amount, but it's not that common for obvious reasons and the way that usually works is that you can only do it once -- if you got the $750 from one user you can't then go to the next user and get another $750, all you should be able to do is make them split the bill. But copyright law is bananas.
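The averaging argument can be checked with a toy model (the one-in-three skew is the illustrative distribution from the comment above):

```python
# Toy model: N users end up with a copy, so total uploads across the
# network is N-1 (one user started with the original).
N = 900

# Uniform case: N-1 uploads spread over N users; the average is just under 1.
uploads_uniform = [1] * (N - 1) + [0]
avg = sum(uploads_uniform) / N                   # (N-1)/N, just under 1

# Skewed case: one user in three uploads three copies, the rest upload none.
# The mean is still ~1, but the median drops to 0: a randomly picked user
# more likely than not distributed nothing.
uploads_skewed = [3, 0, 0] * (N // 3)
mean = sum(uploads_skewed) / N                   # 1.0
median = sorted(uploads_skewed)[N // 2]          # 0
print(avg, mean, median)
```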
> The total actual damages are then $750, but the law would allow them to recover a minimum of $750 from each of them
Because they're statutory damages, because the actual point of the exercise is to make an example of the person breaking the law. Obviously in scenarios where it's feasible to reliably prosecute a significant fraction of offenders then making an example of people isn't justifiable.
> Because they're statutory damages, because the actual point of the exercise is to make an example of the person breaking the law.
That's quite inaccurate. Punitive damages are typically treble damages, i.e. three times the actual amount, not 750 to 150,000 times the actual amount.
The actual point of statutory damages is that proving actual damages is hard, and if you caught someone with a pirate printing press it's somewhat reasonable to guesstimate they were personally making hundreds to thousands of copies. The problem is that this was then applied to P2P networks and people who were on average making a single copy.
What is? My claim is that regardless of the exact wording the intent behind the law in this specific case is to make an example of violators. Do you dispute that? If so, on what basis? Because I believe the past several decades of results speak for themselves.
> The problem is they then applied that to P2P networks and people who were on average making a single copy.
A person retains a single copy for himself. However he does indeed actively participate in the creation of many other copies (potentially hundreds of thousands as you say). That sure sounds like the digital equivalent of a pirate printing press to me.
What you were describing was not P2P but rather the users of pirate streaming sites. And as we see rights holders don't generally pursue such people, preferring instead to only go after distributors.
I say all of this as someone who doesn't support current copyright law and sincerely has no objections to what Facebook did here.
What's inaccurate is the notion that statutory damages were intended to exceed actual damages by such an unreasonable factor on purpose (hundreds to hundreds of thousands of times, when the standard for punitive damages is 3x), rather than that being the ridiculous result of applying a law written with one circumstance in mind to an entirely different circumstance.
> A person retains a single copy for himself. However he does indeed actively participate in the creation of many other copies (potentially hundreds of thousands as you say).
Many of the early P2P networks (and some of the current ones, especially for small to medium files) don't have more than one user participating in any given transfer. If you wanted to download something on Napster it would connect to one other person and download the entire file from them, with no other users being involved.
That is also what happens in practice in modern day even for the networks that try to download different parts of the same file from different people, because connections are now fast enough that as soon as you connect to one peer, you have the whole file. A 3MB MP3 transfers in ~30ms on a gigabit connection, meanwhile the round trip latency to a peer in another city is typically something like 100ms (even for fast connections, because latency is bounded by the speed of light). So it's common to connect to one peer and have the entire file before you can even complete a handshake with a second one, and rather implausible for a file of that size to involve more than a single digit number of peers. Hundreds of thousands would be fully preposterous. And then we're back to, the number of uploads divided by the number of users is ~1, so if the average transfer involves, say, four peers, the number of uploads the average peer will have participated in for that file will also be four. Not hundreds, much less hundreds of thousands.
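The timing claim is easy to sanity-check with the numbers as stated (3 MB file, gigabit link, ~100 ms round trip):

```python
# Sanity check: how long does a 3 MB MP3 take on a gigabit link,
# versus a typical ~100 ms round trip to a distant peer?
FILE_BYTES = 3 * 1024 * 1024      # 3 MB MP3
LINK_BPS = 1e9                    # gigabit connection, in bits per second
RTT_MS = 100                      # typical round trip to a peer in another city

transfer_ms = FILE_BYTES * 8 / LINK_BPS * 1000
print(round(transfer_ms))         # ~25 ms: the whole file can arrive from one
                                  # peer before a handshake with a second
                                  # peer even completes
assert transfer_ms < RTT_MS
```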
Meanwhile you're back to the problem where splitting the files should be splitting the liability. If four people each upload 25% of one file to each of four other people, the total number of copies is four, not sixteen. If you want to pin all four on the first person then also pinning all four on the second person is double dipping.
Agreed that the magnitude of the penalty no longer matches the intent of the original law. But note that my original claim was not inaccurate. The point of the law here _is_ to make an example of people. Those two things aren't mutually exclusive.
I believe your claims about network speeds and peer count are largely inaccurate when it comes to torrents (and any other block based protocol that involves the equivalent of swarms) but I won't belabor it.
I'll also ask how you reasonably expect a court to go about performing the partial attributions you describe for data torrented from a large swarm. Like how would that even work in practice?
You make an interesting point about overall averages yet it seems to entirely miss the point of the law. Damages aren't reduced if I only illegally reproduce 25% of a book. A single chapter and the entire work are treated as equivalent here. It's the act and intent that the law is concerned with, not the extent (at least within reason).
The question is what color your bits are. Not how many of them you have or how many different people you obtained them from.
> The point of the law here _is_ to make an example of people. Those two things aren't mutually exclusive.
Whether they're mutually exclusive or not, I don't think that was even the point of the law. The point of statutory damages is supposed to be to address the problem of proving an exact amount of actual damages, by instead providing what was supposed to be a plausible estimate of them. But then they got applied in a context where the number hard-coded into the law is an exorbitant overestimate.
> I believe your claims about network speeds and peer count are largely inaccurate when it comes to torrents (and any other block based protocol that involves the equivalent of swarms) but I won't belabor it.
I'm pretty confident that's accurate for small files like a 3MB MP3. They literally do get fully transferred before the client has time to connect to a non-trivial number of peers. A lot of torrents use a 4MB chunk size, and even when the chunk size is smaller, you're still going to get multiple chunks from any given peer. Even with e.g. a 512kB chunk size a 3MB file has an upper limit of 6 peers, if you can even connect to that many before the first one has sent the whole file.
Large files could use more peers, but "hundreds of thousands" is still a crazy number. There are a non-trivial number of consumer junk routers that will outright crash if you try to open that number of simultaneous connections.
And I regularly use BitTorrent for Linux ISOs (I know it's a cliche but it's true), which are decently large files. The median number of connected peers when seeding really is zero, and the active number rarely exceeds 1, for anything that isn't a very recent release. Even if I leave the thing on indefinitely, until it's no longer a supported release and no one wants it anymore, on a connection with a gigabit upload, the average ratio will end up around 1. Because of course it is, because that's inherently the network-wide average.
> I'll also ask how you reasonably expect a court to go about performing the partial attributions you describe for data torrented from a large swarm. Like how would that even work in practice?
I mean this isn't really that hard, right? If getting the exact number for a specific person is unrealistic, we still know that total copies (and therefore total uploads) per user is ~1. So to do the normal punitive damages amount you take that number and multiply it by 3 instead of hundreds or hundreds of thousands.
> Damages aren't reduced if I only illegally reproduce 25% of a book. A single chapter and the entire work are treated as equivalent here.
But the entire work is being reproduced. The issue is that in the cases where it's a group effort, they're trying to double dip.
Suppose Alice, Bob, Carol and Dan work together to break into your shop like Ocean's 11 and steal four $1 cookies. They each get a cookie and you lost four. (Never mind whether you actually lost them or not.) If you only catch Carol, it's not always reasonable to put her on the hook for the entire amount instead of only her portion of it, but at least you could plausibly argue for it. But if you catch two of them, or all of them, expecting them to each pay the total for the whole group instead of collectively pay the total for the whole group is definitely unreasonable.
Congress has received plenty of negative feedback from their voters. The intensity and frequency of Republican voters confronting their representatives over many administration policies (e.g., Medicaid cuts, ACA subsidy cuts, tariffs, Epstein, influence of unelected officials) when those representatives hold in-person town halls has led to representatives greatly reducing in-person town halls, replacing them with tele-town halls so they can cut off people.
It won't be. Even if the House swings to the Democratic side it will be a marginal swing, not a massive change. Who knows if the Senate will flip at all.
Half of America loves what's happening and the other half doesn't believe the first half loves it.