Hacker News

I think, ultimately, there's no easy way to do it.

How do you stop spammers and bad actors in general?

> the ability to claim a unique username

Which would presumably require centralisation. We have things like OpenID, but that hasn't really caught on.

What we want is internally inconsistent. We want unique identifiers (this is the "real" "blippage" making this post), the possibility of multiple identities, and the possibility of anonymity.

We also want openness, but are likely to desire censorship for legitimate reasons, too.

Just how are we going to square the circle?

Li'l update: I see NFTs being mentioned, which sounds a bit band-wagony to me. In the Gemini protocol, identity is handled with client TLS certificates; the Lagrange browser can generate a cert quite simply. So in at least some sense, isn't the identity issue "solved"?

Another update: there's also the issue of discoverability. Centralised networks make discoverability easy. It's not insurmountable with decentralised ones (people are willing to host aggregators), but it is harder.



A social network where you pay to join, with only one account allowed per payment method, might stop a lot of nonsense. Make the signup fee high (e.g. $100 USD) and the monthly fee low ($2) to steer people towards one account.

That idea won't get traction because people don't want to pay for a social network. But damn do people love "reality internet" - outrageous content, reaction videos, and otherwise fabricated "content" that I equate to reality TV like "The Real Housewives of ..."

In my opinion, it should also be closed in the sense of no viral/global messaging. With zero need for advertising money, you don't need eyeball time or to be promoting controversial content above people posting pics of their dinner. Let people be social about stuff they are interested in and connect with people they know. That's the other bit - you can't randomly connect/view content by people not in X degrees of separation from you.


We could imagine a paid social network designed not to engage in this toxic attention-grabbing behavior. I guess such a network would get less attention... I mean, that's almost a condition of success, right? But then, has it "gotten traction"?

Perhaps social interaction could be thought of as sort of like food, and social media could be thought of as sort of like junk food -- maybe this paid social media site could be thought of as a less-damaging form of junk food. The metric for success could be something like <user satiation>/<wasted time>.


I generally disagree about the "pay to join" issue, but the thing you're right about is no viral/global messaging. That's the thing that really turns social networks into garbage. The problem being, though, that under late capitalism, that's what a large number of people really want out of them — to build a personal brand and social capital that can be converted into financial capital.


I understand the disagreement about pay to join. A social network needs to pay its bills somehow, and there are two methods: users pay or advertisers pay. Or I suppose with sites like YouTube it's both (users buying stuff with affiliate links, etc.). Pay to join eliminates the need for viral/global messaging, which is the draw for advertisers. Influencers and the like are just bullshit, and they promote look-at-me behavior which is pretty anti-social. I'd argue those types of people are really noisy but fewer in number than casual social network users.

Maybe there is some middle ground where it's free for users and advertisers can pay to advertise, but there won't be the viral content that advertisers want. One approach: the local groups that naturally form (book clubs, moms' groups, gamers, etc.) could be tagged as such so an advertising model can happen.


In my opinion, users should pay for the upkeep of their social network; I just don't think that costs should be used as a tool to try to control behavior or filter out undesirable users (because I believe there will be many desirable users who will be unable to pay, and undesirable users who will be able to). In general, I think the "Public Radio" or Patreon model works best for noncommercial social networks.


> > the ability to claim a unique username

> Which would require a centralisation, presumably. We have things like OpenID, but that hasn't really caught on.

The article addresses this. We already have a decentralized, credibly neutral identity layer with unique names in production (and pretty wide use, at least in the web3 space): Ethereum Name Service.


About squaring the circle: I remember the Slashdot karma moderation system, with community-based optional filtering and thresholds.

https://slashdot.org/faq/karma.shtml
https://slashdot.org/moderation.shtml

The same goes for democracy and politics: how do you prevent bad actors from gaming the system and acquiring illegitimate power?

When moderating a 5000-person, topic-focused community, I used a ruleset which was easy to understand. Along the way I tweaked the ruleset to defend against bad actors using the rules against me.


Yeah, I think a karma-based system is about the right way to go. One system I thought of was that the karma propagated outwards, as it were. So I award A with +1 karma because I like their content and trust them. If A awards B with +1 karma then the karma trickles up to me and I'm more likely to read B. Not sure how my system could be turned into reality, though.
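A minimal sketch of that trickle-up idea (the function name, data shapes, per-hop decay factor, and hop limit are my assumptions, not anything specified in the comment):

```python
# Sketch: karma I assign propagates outward along other users'
# endorsements, attenuated at each hop, so authors I never rated
# directly still get a score in my personal view.
from collections import defaultdict

def propagated_karma(my_votes, their_votes, decay=0.5, max_hops=3):
    """my_votes: {author: karma I gave them directly}
    their_votes: {author: {endorsed_author: karma they gave}}
    Returns my effective karma for every reachable author."""
    scores = defaultdict(float)
    frontier = dict(my_votes)
    for _ in range(max_hops):
        next_frontier = defaultdict(float)
        for author, weight in frontier.items():
            scores[author] += weight
            # Each author's own votes carry a fraction of my trust in them.
            for endorsed, k in their_votes.get(author, {}).items():
                next_frontier[endorsed] += weight * k * decay
        frontier = next_frontier
    for author, weight in frontier.items():
        scores[author] += weight
    return dict(scores)

# I trust A (+1); A trusts B (+1), so B inherits half a point in my view.
votes = propagated_karma({"A": 1.0}, {"A": {"B": 1.0}})
```

With these toy inputs, `votes` comes out as `{"A": 1.0, "B": 0.5}`: B is ranked for me purely because A vouched for them.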


Per-user karma, maybe? You could have both global and per-user karma. Perhaps you could generate views server-side based on global karma, then sort client-side based on user-assigned karma?


This is really just a Web of Trust, right?



