Hacker News | new | past | comments | ask | show | jobs | submit | svcphr's comments | login

> "1. People have wildly incorrect intuitions about where land value is concentrated"

Fwiw this sort of land value gradient has been studied in economics for ages. See papers on monocentric city model, going back to Alonso (1964), Muth (1969), and Mills (1967). Or even further back, von Thünen was talking back in 1826 about how land values spike as you get closer to the marketplace.


I was waiting to read about what these "wildly incorrect intuitions" were, but it's never explained. The maps correctly matched my own intuitions.


Author here. Our blog is generally about property tax reform for our regular readership, which is admittedly less clear to a new reader coming in cold. The intuitions I'm referring to: the average homeowner kind of assumes any tax reform (such as shifting taxes off buildings and onto land) is designed to impoverish them personally. The purpose of these maps is to show such people where land value in cities is really concentrated, i.e., not in the suburbs. The monocentric city model might be intuitive to academics, but it's not among regular everyday people.


This is generally a big problem in Pittsburgh, where huge areas of the most valuable land are owned by "nonprofits"


Do you mean people underestimate how steep the gradient is, or they don't know it at all?

It seems kind of dubious to me that "everyday" people don't understand that land in cities is worth more than land in suburbs. It seems very transparent that you get a smaller lot size for the same price.


Both. They do understand that it’s worth “more” in the city but they vastly underestimate the magnitude, and they vastly underestimate what that means in terms of where the total bulk of land value is concentrated, and therefore what the distribution of winners and losers will be in any tax shift scenario.


> I was waiting to read about what these "wildly incorrect intuitions" were, but it's never explained. The maps correctly matched my own intuitions.

If you are into land value tax discourse maybe, but from my experience at least there is a big lack of awareness of the impact of economic activities on land values as they are not reflected by anything that people get in contact with. That's especially true because neither rents nor property taxes (the one thing people might have exposure to) fully capture it.


I had guessed land in Manhattan as maybe 7x more valuable than the Bronx, based on living in the Bronx and paying rent. For the joys of living in the Bronx my rent was under $1k and I had a separate bathroom that was part of my studio apartment. Meanwhile Manhattan apartments wanted $2k and I had to use a bathroom shared with the floor.


Same. My assumption, before seeing this, was "ok, I'm going to guess land in a city is worth 100 or 1000x land anywhere else", and I guess I overestimated a bit.


The Bronx isn't "anywhere else", it's a region of New York City (one of the five boroughs), just like Manhattan is.


Having grown up in NYC, I’m well aware. I’m also well aware that the Bronx is rather different from Manhattan.


Land and "improvements" are assessed separately, and I believe this is plotting just the assessed land values. In the small text about each map, it says to use the settings to switch to full assessed value or improvements. But still, it's very hard to actually assess land value in an area like Manhattan where there are basically no land-only transactions.


Its market cap is about $58b right now. Tripled in three years!


But the company only sold the shares at $19.3B.


But they also only sold a very small number of shares at that valuation, vs all of them at $20B to Adobe.


this is the answer


Right, which gave them 19B (ish) to play with and they are an independent competitor to Adobe.

Mergers trigger layoffs


Figma raised $1.2B in their IPO. Total shares listed != money raised, not by a long shot. Most shares are just to give liquidity to existing shareholders of the company.


If they have to issue shares the higher valuation is significant


The employees will have to wait 180 days (at least that's the standard) before selling any shares, so they usually feel the effects of a "bounce".


Nice. Very light-weight compared to proper local routers like Graphhopper, OSRM, etc., which can be overkill for simple tasks. Although the 'routing' here is nx.shortest_path, which is just Dijkstra, so pretty slow compared to other easy-to-implement routing algorithms (even just bidirectional Dijkstra or A*... although contraction hierarchies would be a huge gain here since edge weights are fixed). Also not sure why the readme describes it as an approximation? Dijkstra is guaranteed to return the lowest-cost path. Maybe it's an approximation because of the free-flow assumption, or because the NAR dataset is incomplete?
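On the exactness point: networkx's A* (nx.astar_path) with an admissible heuristic returns the same optimal cost as Dijkstra while expanding fewer nodes. A toy sketch (the grid graph is made up for illustration, not from the project):

```python
# Comparing networkx's Dijkstra-based shortest_path with A* using a
# straight-line-distance heuristic on a small weighted grid graph.
import math
import networkx as nx

G = nx.Graph()
for x in range(5):
    for y in range(5):
        if x < 4:
            G.add_edge((x, y), (x + 1, y), weight=1.0)
        if y < 4:
            G.add_edge((x, y), (x, y + 1), weight=1.0)

def h(a, b):
    # Admissible heuristic: straight-line distance never overestimates,
    # so A* stays exact.
    return math.dist(a, b)

src, dst = (0, 0), (4, 4)
p1 = nx.shortest_path(G, src, dst, weight="weight")            # Dijkstra
p2 = nx.astar_path(G, src, dst, heuristic=h, weight="weight")  # A*

cost = lambda p: sum(G[u][v]["weight"] for u, v in zip(p, p[1:]))
# Both are exact: identical path cost (8 unit-weight edges).
assert cost(p1) == cost(p2) == 8.0
```

Both calls return a lowest-cost path; A* just prunes the search toward the target, which matters a lot more on real street networks than on this toy grid.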


Thanks for the heads up on the available optimizations. The "approximations" comment does not apply to the shortest path calculation, but rather to the distance and upper-bound time estimates: it's a consequence of enabling routing for points that don't exist as nodes (closest-node approximation).


> Although the 'routing' here is nx.shortest_path, which is just Dijkstra, so pretty slow compared to other easy to implement routing algorithms

networkx has advantages of being popular, well-documented, pure python (less hassle to maintain) with code that is easy to read and modify. but, one big downside of being pure python means that it also has fundamentally poor performance: it can't use a cpu efficiently, the way the graphs are represented also means it can't use memory, memory bandwidth or cache efficiently either.

orthogonally from switching the search algorithm, one quick way to potentially get a large speedup is try swapping out networkx for rustworkx (or any other graph library with python bindings that has native implementations of data structures and graph algorithms)

another thing to check would be to avoid storing auxiliary node/edge attributes in the graph that aren't necessary during search, so that cache and memory bandwidth can be focused on node indices and edge weights.

I went down a rabbit hole playing around with this some years ago (using Cython, not Rust). Relatively simple things like "store the graph in an array-oriented way (CSC/CSR sparse matrix format or similar)" and "eliminate all memory allocation and pure Python code from the Dijkstra search, replace it with simple C code using indices into preallocated arrays" get you pretty far.

It is possible to get further performance increases by reviewing and tweaking the search code to avoid unnecessary branches, investigating variants of the priority queue used to order partial paths by path distance (I found switching the heap from a binary tree to a 4-ary tree gave a 30% reduction in running time), and seeing if the nodes of the graph can be reindexed so that nodes with similar indices are spatially close and more likely to be in cache (another 30% or so reduction in running time from Hilbert curve ordering). Some of this will be quite problem- and data-dependent and not necessarily a good tradeoff for other graphs.

All up, I got around a 30x speedup vs baseline networkx for Dijkstra searches computing path distances to all nodes from a few source nodes, on a street network graph with 3.6M nodes & 3.8M edges (big enough not to fit in L3 cache of the CPU I was running experiments on).
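A minimal sketch of the array-oriented idea, in plain Python for readability (the real gains come from pushing this inner loop into Cython/C; the tiny CSR arrays at the bottom are made up for illustration):

```python
# Dijkstra over a graph stored in CSR form: three flat arrays
# (indptr, indices, weights) instead of per-node dicts. Node ids are
# plain integers, and the distance array is preallocated up front.
import heapq

def csr_dijkstra(indptr, indices, weights, source, n):
    """Distances from `source` to all n nodes of a CSR-stored digraph."""
    INF = float("inf")
    dist = [INF] * n          # preallocated, indexed by node id
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:       # stale heap entry, skip
            continue
        # Outgoing edges of u live in the contiguous slice
        # indices[indptr[u]:indptr[u+1]] -- cache-friendly by design.
        for e in range(indptr[u], indptr[u + 1]):
            v = indices[e]
            nd = d + weights[e]
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Tiny 4-node example: edges 0->1 (1), 0->2 (4), 1->2 (2), 2->3 (1)
indptr  = [0, 2, 3, 4, 4]
indices = [1, 2, 2, 3]
weights = [1.0, 4.0, 2.0, 1.0]
dists = csr_dijkstra(indptr, indices, weights, 0, 4)
# dists == [0.0, 1.0, 3.0, 4.0]
```

Same algorithm as nx.shortest_path, but the data layout is the part that ports directly to C over preallocated arrays.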


What does light-weight mean in this case? Less data? Ease of installation?


How did you decide on routing engine? I’ve used Graphhopper in the past — is OSRM an improvement?


I actually tried a couple different engines before landing on OSRM. I started with R5 (since it can also do public transit) then switched to Valhalla.

The main limiting factor was speed. Basically all routing engines except for OSRM are too slow to compute continent-scale travel time matrices. For reference, it took Valhalla around a week to finish 3 billion point-to-point pairs. OSRM did the same calculation in about 2 hours.

I can't speak to Graphhopper since I haven't tried it. Maybe something to test in the future!
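For anyone curious what a matrix call against OSRM looks like: the table service (/table/v1/{profile}/{coordinates}) returns a full duration matrix in one request. A hedged sketch; the host and coordinates below are placeholders, assuming a locally running osrm-routed:

```python
# Building an OSRM "table" (travel-time matrix) request URL.
# Host and coordinates are placeholders for a local OSRM instance.
host = "http://localhost:5000"
coords = [(-73.9857, 40.7484), (-73.9772, 40.7527)]  # (lon, lat) pairs
coord_str = ";".join(f"{lon},{lat}" for lon, lat in coords)
url = f"{host}/table/v1/driving/{coord_str}?annotations=duration"
# Fetch with e.g. urllib.request.urlopen(url); the JSON response
# contains a "durations" field: an NxN matrix of seconds between
# all pairs of the supplied points.
```

Note OSRM expects longitude before latitude, which trips a lot of people up.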


Yeah OSRM precomputes routes so if you just need the same mode of transportation and not dynamic params (like avoid tolls on this route, etc) it's gonna be a lot faster. Valhalla was designed for flexible/dynamic routing


It precomputes partial routes that are combined at run time. :)


Makes sense! Heads up that Graphhopper’s matrix algorithm isn’t open sourced so probably won’t work for this use case. I’ve had good experiences with it otherwise.


People from both countries can speak a version of Spanish that is mutually intelligible. But if you go into a high school (or even listen to adults that are being very casual) then it'd sound wildly different.

I speak Chilean Spanish. Distinctive characteristics include no use of vos; the "tú" conjugation is often "-ai" (cómo estai?) or "-i" (qué teni allí?); saying weón every sentence; using "po" for emphasis (sí po!); specific words like "fome" (boring), "la raja" (awesome), "bacán" (cool); phrases like "estoy cagado de hambre", "estoy chato", "pasarlo chancho", "cachai?"...

It's also very related to class, at least in Chile. Even I struggle to understand people in tougher neighborhoods of Santiago.


sorry, i was talking about the differences between buenos aires spanish and uruguay spanish. i totally cacho that chilean is a different language entirely, however much germán garmendia tries to pretend otherwise :)


Ohhh. That makes way more sense. I was surprised "contigo" was the only difference that came to mind haha


Q: How many meanings does "weón" have?

A: Yes


It's worth noting that the author (Christopher Brunet) is an intentionally inflammatory right-wing commentator, not an economist. Do not take this blog seriously.

The short story: EJMR is an anonymous forum used by economists that occasionally has job market rumors, but is more often filled with sexism/racism/etc. The site anonymized posters using a hash of IP + thread ID with no salt. Three economists realized you could identify the location of many posts and wrote a paper showing how much of the toxic language came from top university IP addresses. Naturally people on the forum (and sympathizers like this writer) are collectively losing their minds, realizing that they may not be as anonymous as they had assumed. So they are threatening legal action, claiming "doxxing", and writing stupid blog posts.


So in other words they did get doxxed, you're just happy about it. You say not to take the blog seriously but then affirm his claims

It's very short-sighted to gloat about someone else's anonymity being lifted just because you don't like them. You may not think you have something to hide today, but that doesn't mean you'll have nothing to fear tomorrow

The claim that this is "Every economist on Mastodon" is of course a stupid exaggeration, but it's still a significant enough event to warrant reporting


Saying they "realized you could identify the location of many posts and wrote a paper" is downplaying the situation. The authors essentially ran a lookup table attack, computing over 3 quadrillion hashes to crack the IPs. The website owner incorrectly thought and claimed this would protect IPs, which are PII.
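To make the lookup-table point concrete: with no salt, the pseudonym for every candidate IP can simply be computed and matched. A toy illustration (the actual EJMR hash scheme differed; the function names and hash choice here are hypothetical):

```python
# Why an unsalted hash of a small input space gives no anonymity:
# the attacker hashes every possible IPv4 address and compares.
import hashlib
import ipaddress

def pseudonym(ip, thread_id):
    # Unsalted hash of IP + thread ID (simplified stand-in scheme).
    return hashlib.sha1(f"{ip}|{thread_id}".encode()).hexdigest()[:12]

def crack(target, thread_id, candidate_ips):
    # A real attack enumerates the whole IPv4 space: ~4.3e9 hashes
    # per thread, hence quadrillions of hashes across many threads.
    for ip in candidate_ips:
        if pseudonym(ip, thread_id) == target:
            return ip
    return None

# Toy demonstration over a small candidate pool (10.0.0.0-10.0.0.99):
pool = [str(ipaddress.IPv4Address(a)) for a in range(167772160, 167772260)]
secret = pseudonym("10.0.0.42", "thread123")
assert crack(secret, "thread123", pool) == "10.0.0.42"
```

A salt long enough to make enumeration infeasible (or simply not deriving pseudonyms from IPs at all) would have prevented this.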

For more context, see here: https://marginalrevolution.com/marginalrevolution/2023/07/th...


I knew nothing about him but very, very quickly got a “right of center troll” kind of vibe.


Thank you! Your summary makes for an interesting read and makes much more sense than the article.


In the US, inventors (not the same as entrepreneurs, but often related) are also disproportionately likely to be immigrants:

> We find immigrants represent 16 percent of all US inventors, but produced 23 percent of total innovation output, as measured by number of patents, patent citations, and the economic value of these patents. Immigrant inventors are more likely to rely on foreign technologies, to collaborate with foreign inventors, and to be cited in foreign markets, thus contributing to the importation and diffusion of ideas across borders.

https://www.nber.org/papers/w30797


Aren't immigrants about 15% of the US population though? It's not that disproportionate.


You can have weak inequalities in the preferences over alternatives -- e.g. you're indifferent between all rankings where your preferred candidate is number 1


He's referring to research by Nick Bloom, an economist at Stanford. They've worked together.

Ctrl-F "management" on his research page (https://nbloom.people.stanford.edu/research). In one paper, they randomly assigned consulting services from a management consultancy to manufacturing plants in India and found a 17% increase in productivity.


not sure that means it generalizes to all countries or types of business


Oh, definitely not! As with all RCTs, gotta worry about generalizability.

He has a lot of other work showing correlations between management and productivity at much more scale, but this was the one attempt (that I know of) trying to establish something more causal

