
This is an excellent response, and thank you for writing it (and for considering Matrix so highly).

I think the main reason why this isn't going to turn into Reddit is that the reputation is strictly relative, and hopefully we can set a precedent for proportional behaviour.

So, while you will always have some set of people who think you shouldn't discuss firearms and assign folks in firearm communities negative reputation, hopefully that would never result in widespread bans or filtering. If a moderator got overenthusiastic and imposed a blanket ban, hopefully their community would rise up and vote with their feet (just as you would today if a rogue moderator got trigger-happy with /ban). Meanwhile, if someone got overenthusiastic with blanket filter rules (e.g. "automatically assign negative weight to content from users from these communities so it's hidden by default"), then individuals could and should override it by assigning their own filters.

We've tried to model it after real life, for better or worse. If someone chooses to use the same identity for discussing both firearms and (say) cooking, then they may get shunned by some cookery folks. But most of the cook group won't care and will ignore it. You might get unlucky and discover the head chef is anti-firearms and kicks you out, but frankly that sounds like a good reason to find a more broadminded cook group.

You're right that a large server like Matrix.org could take an opinionated view and go and apply radical blanket bans for all sorts of stuff, and set a precedent that it's okay to throw your weight around. But a) we're not going to do that - for instance we haven't yet blocked any server from Matrix.org; b) the only blanket bans we're considering for Matrix.org at this time are against spam, child abuse, and folks conspiring to kill; c) we're pretty confident that if Matrix.org overstepped its bounds on moderation, the network would route around the damage anyway: there are something like 55,000 servers that we're aware of on the network, and we believe Matrix.org accounts for only about 20-30% of the traffic (and that includes IRC bridges etc). So folks would vote with their feet and shift server - especially once we have portable accounts.

Finally, P2P Matrix will change the dynamic entirely on this - no more servers (by default) means that users will be entirely making their own choices on where to hang out and who to hang out with.



> hopefully their community would rise up and vote with their feet (just as you would today if a rogue moderator got trigger-happy with /ban)

This doesn't tend to happen though, in communities offline or online. My own experience (particularly in online communities) is that once groups achieve a certain momentum, the average participant in the group cares very little about policies that impact a tiny minority of users no matter how draconian they are.

If it were true in general that people would "vote with their feet" we wouldn't have autocrats in positions of power or much injustice in the world at all.

I love Matrix and have been running my own Synapse instance for friends and family for years -- I appreciate all the work that has gone into it. And further, I appreciate that this is a challenging problem to solve in general, and that you're unfortunately being forced to come up with "lesser evil" solutions to counter all of this talk of E2E encryption being the "tradecraft of terrorists and child pornographers".

I'm just deeply disappointed to see it -- social credit styled systems like this penalize people for having bad mental health days, or who have unusual interests or points of view, or who are LGBTQ+, or belong to some other vulnerable minority group, and I think generally have a chilling effect on communities -- or rather on these specific subcultures within those communities, clearly not all of whom are "bad faith" actors.


> My own experience (particularly in online communities) is that once groups achieve a certain momentum, the average participant in the group cares very little about policies that impact a tiny minority of users no matter how draconian they are.

Agreed this can be a problem - but I wonder if it can be solved with UX? If the filtering rules are really obvious, and the clients give you the ability to actually visualise and curate the filters being applied, I'd hope that it would be much easier to spot toxic moderation.

> I'm just deeply disappointed to see it -- social credit styled systems like this penalize people for having bad mental health days, or who have unusual interests or points of view, or who are LGBTQ+, or belong to some other vulnerable minority group, and I think generally have a chilling effect on communities -- or rather on these specific subcultures within those communities, clearly not all of whom are "bad faith" actors.

Hm, "social credit" implies an absolute reputation system (like Reddit, HN, China etc ;) which this categorically isn't.

The idea is that if you as a user want to subscribe to a reputation feed which is prejudiced against minority groups - then that's your bad choice to make. You'll find yourself having to explicitly remove the filter to follow the conversations around you. Alternatively, if you find yourself in a community which has imposed a blanket ban or filter on minorities, you may want to find a different community.
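
To make that concrete, here's a toy sketch of how a client might combine subscribed feeds with personal overrides. All the names and the shape of the data here are hypothetical - as I say below, none of this exists yet:

  # Hypothetical sketch of relative reputation: each feed maps a target
  # (user/room/server) to a weight. Nothing here is absolute or global.
  feeds = {
      "@cooking-trust:example.org": {"@chef:example.org": -0.8},
      "@friends-trust:example.org": {"@chef:example.org": +0.5},
  }

  # The user chooses which feeds to subscribe to, and how much to trust each.
  subscriptions = {"@cooking-trust:example.org": 0.3,
                   "@friends-trust:example.org": 1.0}

  # Personal overrides always win - this is the "remove the filter" step.
  overrides = {"@chef:example.org": +1.0}

  def score(target: str) -> float:
      if target in overrides:
          return overrides[target]
      total = sum(feeds[name].get(target, 0.0) * weight
                  for name, weight in subscriptions.items())
      return max(-1.0, min(1.0, total))

  # Content below a user-chosen threshold is hidden by default, not deleted.
  print(score("@chef:example.org") > -0.5)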

We get that there is a massive responsibility on the Matrix team to implement UX for this which is designed against factionalism, censorship, filter bubbles, absolutist social credit, persecution, polarisation, antagonism etc. But we also feel a massive responsibility to stop users getting spammed with invites to child abuse/gore/hate rooms, or from accidentally hosting content which could get them incarcerated.

Critically, this stuff doesn't really exist yet - the first developer hired to work fulltime on it hasn't started yet. So this is the perfect time to give feedback to help ensure this rep system actually works (at least as well as real-life society does) and doesn't go toxic like so many others have before it.


> I wonder if it can be solved with UX?

I saw an HN comment the other day, talking about the problem of popular servers in the fediverse blocking polarizing servers. It proposed a solution at the client level: make it easy to switch between identities on different servers. My addition: UX along the lines of Firefox multi-account containers, where you can have multiple tabs (here: conversations/rooms) with different identities open alongside each other, rather than having to switch profiles.

I think making it easier to participate in other communities without losing access to the large one, should the large one stop federating, is a good strategy for encouraging users to vote with their feet. Otherwise, it's hard to justify losing access to the large community, just to enable interaction with a small one.

I'm not sure how this solution squares against combating abuse, though; encouraging multiple accounts might make this harder.


Thanks for your response here. You mention that it isn't an "absolute reputation system" -- that makes sense. I'd like to know more about that because I think I'm misunderstanding the proposal.

I understand that it isn't a single dimension (like HN or Reddit karma) -- it seems your proposal is more or less a "tagging" system, but anonymized so that other users wouldn't be able to, say, look at my profile and see how other users have tagged me. But other users or operators could apply a filter to exclude comments from users based on those tags? I feel like my understanding must be incomplete, because that wouldn't be very anonymous -- e.g. if I applied a filter to exclude users tagged with "gopher-enthusiasts", and suddenly stopped seeing messages from certain active members of a community I was in, that would "out" those users as "gopher-enthusiasts". So I assume the system you're proposing is more sophisticated than that. Can you clarify?

Based on my reading of what was proposed, I'm particularly concerned about these scenarios:

1. Say you have a community centred on the goldfish keeping hobby. It happens to be the largest such community -- many thousands of members. And (to use the example given by OP), suppose a large contingent of moderators (or perhaps even the server operator themselves) are super averse to gun enthusiasts, to the point where they'll either ban anyone they learn to be a gun enthusiast, or persistently tarnish any member's reputation score (for lack of a better word) until they leave, or simply won't allow anyone who is also in a gun community to participate (does this reputation system give others insight into what communities I'm in?). This is problematic because there aren't many other goldfish communities worth participating in -- all the experts participate in this one, and as a newcomer to the goldfish keeping hobby, I won't get very far "voting with my feet" here. Presumably the overwhelming majority of participants won't care or even necessarily notice that gun enthusiasts are "filtered" from the group.

Nothing stops this particular scenario from playing out today, but I guess my concern is that the proposed system would make this scenario super easy for operators to implement -- I would be filtered from the group without anyone knowing anything about me other than that maybe I had some association with a gun-focused group at some point.

2. Say you're having a mental health crisis, and you end up saying some stuff in a channel with many participants that you regret later. How does that impact your global reputation, and for how long?

3. Say you're a Marxist and participate in Marxist discussion groups. What kind of metadata will the reputation system generate about you? Is it possible that it'll put you on the same filter/tag lists as "terrorists" and people who advocate for the overthrow of governments, even if you don't personally hold such beliefs?

4. (Mostly a rehashing of #3) Say you're involved in some (legal, consensual, 18+) fetish community. Are you now globally on a filter list for sexual deviants that will keep you from joining, say, a parenting community?


> 1. Say you have a community centred on the goldfish keeping hobby. It happens to be the largest such community -- many thousands of members.

[...]

> 4. (Mostly a rehashing of #3) Say you're involved in some (legal, consensual, 18+) fetish community. Are you now globally on a filter list for sexual deviants that will keep you from joining, say, a parenting community?

Would these be good usecases for having two identities?


This is a complex issue. I'm not sure if Matrix's reputation system can be used to "call out" individuals or groups, the way Twitter callouts (or blocklists/blockchains) operate. However, I have seen someone on Discord get called out on Twitter for being a pedophile (I only saw it via Twitter, since Discord doesn't let you subscribe to reputation services that comment on users as you view them), and then reregister under a different Discord identity and join servers without disclosing the original identity. So this is already happening.

Twitter now shows posts liked (not just retweeted) by those you follow. I've heard that has led to "like policing", in addition to avoiding people based on who they follow.

Are reputation feeds going to be subject to threats of libel lawsuits if used for false reporting?


As Yoric says (btw, Yoric: check your DMs ;P), #1 and #4 sound like a good case for maintaining different personae.

For #2, I guess you'd need to petition whoever maintains the blocklists that are blocking you. Or give up and start a new identity.

For #3, yeah, there's a risk here that somebody enthusiastically starts building a reputation list for The Best Marxist Content and puts the hash of your user ID on it. Someone then reverses the hash via dictionary lookup or similar, proves that you were on the list, and promptly tries to arrest you. Frankly, that risk exists already today. We may need to think of better pseudonymisation approaches than just hashing, though.
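
To illustrate why bare hashing is weak here - a toy sketch, not how any real list is published: Matrix IDs are public and enumerable, so a published hash can be reversed just by hashing candidates until one matches.

  import hashlib

  def h(user_id: str) -> str:
      return hashlib.sha256(user_id.encode()).hexdigest()

  # What a "pseudonymised" reputation list entry might contain:
  published = h("@alice:example.org")

  # An attacker with a scraped directory of user IDs just tries them all:
  directory = ["@bob:example.org", "@alice:example.org", "@carol:example.org"]
  for candidate in directory:
      if h(candidate) == published:
          print("list entry is", candidate)  # pseudonymity defeated

(A per-list secret salt or HMAC raises the bar, but only for as long as the key stays secret.)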


> If someone chooses to use the same identity for discussing both firearms and (say) cooking, then they may get shunned by some cookery folks.

It seems like you're conflating two different kinds of things under "reputation".

If I start talking about firearms in a cooking group, obviously I'm way off topic and I should expect my posts to be moderated. If I start spouting insults, that's not just off topic but abusive and I should expect to be treated accordingly. And if I do those sorts of things in multiple forums, I should expect to get a global reputation for not being a good participant and suffer the appropriate consequences.

But if I come into a cooking group and just talk about cooking, why should the information that I'm also in a completely different group talking about firearms even be relevant? Why should "being a member of a group that talks about firearms" be part of my global reputation at all?


I got hit by this with Reddit and the masstagger. Normally, people aren't going to go through the post history of someone who says "look at these muffins I baked" and find they posted in an unsavory sub years ago. But with masstagger, anyone who had posted in T_D got a big ol' flair calling them out. The only reason I went there, though, was to say "hey, you guys are stupid racists" to some posts. The actual content of my posts wasn't discussed, merely the fact that I filthied myself by even going near that sub. For Matrix, it would be like someone going into a firearms room to argue for increasing gun control laws in the US, and then all of a sudden your global reputation includes "firearms", and people on cooking topics will call you an asshole who supports murdering children, even if you are the most anti-firearms person in the world.


> But if I come into a cooking group and just talk about cooking, why should the information that I'm also in a completely different group talking about firearms even be relevant?

Agreed, it shouldn't be relevant, but unfortunately it often is. People are so mired in identity politics that your affiliations outside of a particular group will often matter within that group. It's not fair and it's not rational, but that's what discourse has devolved to, sadly.


> People are so mired in identity politics that your affiliations outside of a particular group will often matter within that group.

I understand that people are often like this. What I'm wondering is why Matrix would include such information in the reputation system they say they're building, since that basically just encourages people to look at irrelevant information and engage in identity politics instead of discouraging it.


A filtering system which doesn't have the concept of identity sounds interesting; how would it work?


Yeah, without the ability to query a particular identity it's not clear (to say the least) how you would go about filtering out malicious identities.

To some extent, identity politics based on public associations is an unavoidable problem. Public data can be scraped so someone is probably going to aggregate and analyze it at some point.

That said, it seems important that a reputation system not facilitate making associations that otherwise wouldn't have been visible. To that end, it's important to take care not to accidentally incentivize community moderators to leak information that otherwise wouldn't have been discoverable by the general public.

In particular, it occurs to me that a naive reporting mechanism inherently reveals that the reported identity has associated with the reporting operator. I assume you've given this problem some thought - is there an obvious way around it? My concern would be that a more general-use reputation system (i.e. one that goes beyond simple "illegal content" and "spam" event reports) would rapidly begin leaking association data on a broad scale, even from otherwise private communities.

I guess the goals here are at odds in a fundamental way. A server should be able to report scores for its participants. Querying an identity should reveal its various scores. Now repeat spammers (or those posting illegal content, or who are just generally assholes, or whatever) can be filtered out. But in being able to query scores for an identity, I don't see how you can avoid revealing the entity that reported any given score. If a broad set of categories are being reported (consider, for example, the birthday cake example from the article) then the information leakage seems like it would end up being quite broad.
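
To illustrate, even a minimal report record seems to give the game away the moment it's queryable - this shape is entirely hypothetical, nothing Matrix has specified:

  # A naive report event. Anyone who can query @alice's scores now learns
  # she was active on cooking.example - the association leak described above.
  report = {
      "reported_user": "@alice:example.org",
      "category": "spam",
      "score": -0.7,
      "reporter": "cooking.example",  # <- leaks where @alice hangs out
  }

And dropping the "reporter" field doesn't obviously help, since then you have no way to weigh reporters differently or discount a malicious one.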


Perhaps rather than filtering per se, focus on the economic equation of moderation. Fundamentally, moderation is about the time/resource cost of the moderator(s) vs the time/resource cost of evasion. Good moderation is probably an AI-complete problem, so it's hard to automate right now. Most efforts at improving that seem either to use broad-brush measures and heuristics on the mod side, or so-so proxy measures on the evasion side. From captchas to money to asking for ID, all of these at the end of the day are about trying to make it more 'expensive' to evade bans. If the expense is really high, then even a small bit of moderation can keep up.

But instead of any of that, why not just do a time token directly, and with full pseudonymity? Matrix.org could ask people to do something like brute-force RSA, choosing key lengths based on how much time they want to represent, and then sign a "Time Level" certificate result. Community operators could then dynamically adjust how much "time investment" they wanted to require in order to participate, and would have a mechanism to ban independent of IP or anything else. And this could be expected to increase over time as people let their systems run a day or two a month. If in a few years it requires a token equivalent to a month of computation time, that would be a high bar to evasion. It would not require any money, identity, or knowledge of behavior elsewhere, but people would have strong incentives not to burn their Time Level tokens, or at least to comply with non-permanent bans. You could further tweak things by having per-community tokens which can only be issued once per cert, so identity can't be as easily tracked across communities while still stopping evasion. This should all be near-completely automatable as well.
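
As a rough sketch of the idea - here using a hashcash-style proof-of-work as a stand-in for the RSA brute-force, since the property that matters is just tunable expected work:

  import hashlib

  # Find a nonce so that SHA-256(identity:nonce) has `bits` leading zero
  # bits. Each extra bit doubles the expected minting time, so a community
  # can dial the "time investment" up or down; verifying is a single hash.
  def mint_token(identity: str, bits: int) -> int:
      target = 1 << (256 - bits)
      nonce = 0
      while True:
          digest = hashlib.sha256(f"{identity}:{nonce}".encode()).digest()
          if int.from_bytes(digest, "big") < target:
              return nonce
          nonce += 1

  def verify_token(identity: str, nonce: int, bits: int) -> bool:
      digest = hashlib.sha256(f"{identity}:{nonce}".encode()).digest()
      return int.from_bytes(digest, "big") < (1 << (256 - bits))

  nonce = mint_token("@newuser:example.org", bits=20)  # ~1M hashes expected
  assert verify_token("@newuser:example.org", nonce, 20)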

Anyway, just some musing. I guess the question is: if communities had effective cryptographically guaranteed rate controls and moderation stickiness, would they really need more in practice to keep up?


Don't measures based on computational difficulty have the effect of erecting an impossibly high barrier for those with limited access to such resources? (The poor, anyone using a mobile device, etc.) Any system with a reasonably low bar can probably be worked around by a spammer at a low enough cost per account that it's unlikely to be of much use.

Some services (many subreddits, for example) use account age as a metric. That's easy to work around with mass registrations of zombie accounts though.

Some services (again many subreddits) use overall network reputation as a metric. That makes life difficult for new users though in addition to all the privacy issues surrounding centralized reputation and identity.

Some services use a phone number as a unique identifier that's more difficult to come by than an email or IP address. That still poses an accessibility issue and also introduces a privacy one.

Sorry, I don't actually have a solution here. Just a bunch of problems.


> Don't measures based on computational difficulty have the effect of erecting an impossibly high barrier for those with limited access to such resources?

I don't think this is true so much, given the flattening of computational growth. These days a 10-15% general gain year over year is quite good, and we've seen generations with less. Order of magnitude is close enough in this case; there isn't much practical difference between a week and two weeks.

> (The poor, anyone using a mobile device, etc.)

Case in point, for much of the population their mobile devices may well be their most powerful ones. A 7 year old PC can still be extremely capable, etc.

Also, again, this is just another tool idea. A community can use it to whatever degree they deem appropriate, and that can vary dynamically. So in regular times a low-volume community might set a very low level - just a few minutes' worth, say. But if there was a sudden influx, they could temporarily ramp it up for new joiners.


> for instance we haven't yet blocked any server from Matrix.org

This is stretching the truth. I remember there was a time when Matrix.org attempted to purge a lot of channels related to image board communities. Channels whose names started with the format /?/ were deleted from Matrix.org. This includes the federated version of channels from other homeservers. I think it is disingenuous to say that you do not block any other homeservers when you have deleted channels from other homeservers, preventing them from federating properly. Some users have been banned from official channels on the Matrix.org server because of the homeserver they were registered on. Perhaps things have changed since I was last involved with Matrix, but from what I saw, the Matrix.org homeserver was to be avoided since it did not play nice.


I'm not aware of us ever having done any en-masse removal of /?/ style rooms from the Matrix.org server. However, it's true that we do remove individual rooms from the server if they break the server's T&Cs (https://github.com/vector-im/policies/blob/master/docs/matri...) - but that's completely different to unilaterally blocking other servers or shutting down rooms based on the pattern of their name(!), which we don't (so far, at least).

There's a whole bunch of conspiracy theories that we do block servers though - but ironically this tends to be due to federation problems (often the remote server hasn't tuned its rate limits, and so as its users get more active, busier servers trying to talk to it get rate limited. matrix.org is one of the busiest servers, therefore the first symptoms of the problem are that it looks like matrix.org is explicitly blocking the server. comedy, eh?).

However, if folks want to believe the conspiracy theory instead, we're not going to shed too many tears.


I wouldn't have written it if I hadn't seen that you frequently engage on these threads, so give the credit to yourself for actually building something for the community, instead of just talking about it in one-way blog posts.

> We've tried to model it after real life, for better or worse. If someone chooses to use the same identity for discussing both firearms and (say) cooking, then they may get shunned by some cookery folks. But most of the cook group won't care and will ignore it. You might get unlucky and discover the head chef is anti-firearms and kicks you out, but frankly that sounds like a good reason to find a more broadminded cook group.

> You're right that a large server like Matrix.org could take an opinionated view and go and apply radical blanket bans for all sorts of stuff, and set a precedent that it's okay to throw your weight around

My biggest fear is both of these things in tandem. I agree that individually, they're solvable problems. For instance, I'm not actually worried about Matrix.org banning me by association, and in fact, welcome a reduction of child porn, spam, etc. And if there was a plethora of cooking groups, and getting banned by one still left me with others, it wouldn't be a huge deal.

But it's the potential combination of clout and curation that makes me concerned. For instance, email is a "federated" standard, in that anyone can get a domain and host their own email server. However, for all intents and purposes, it's not a federated standard, because nearly everyone uses Google, and getting Google to accept email from my domain is nearly impossible due to a variety of factors that I don't have control over. That means that effectively, it's no longer federated unless you play by Google's rules.

I don't know how to solve this problem, and I feel bad complaining about this solution without proposing a better one. I see the predicament that you're in, as well. Governments obviously want to be able to peek into conversations, and will use any excuse they can get to do it. If you try to play the game by their rules and come up with a way to preserve privacy while stopping the spread of things like child porn - by democratizing moderation of the entire ecosystem - then I see how that potentially solves the problem, or at least kills the child porn excuse. But I think the game is rigged, and this capitulation comes with costs to the platform that, because of the rigged game, ultimately won't protect what you're trying to protect. Governments will come up with some other reason why they need access to private conversations, and instead of a single death knell, it'll instead be a drawn-out one.

The only solution is to replace the game with something entirely different, the same way for instance, that cryptocurrency did with financial markets. If they had tried to play by the rules of the existing game, it'd have gotten nowhere, because the financial game is designed to specifically stop things like that.

>Finally, P2P Matrix will change the dynamic entirely on this - no more servers (by default) means that users will be entirely making their own choices on where to hang out and who to hang out with.

I'm very excited about this, because this is the type of outside-the-box thinking that I think falls into the category of "whole other game where existing rules don't matter". I really hope you guys can get the UX good enough to pull it off with wide adoption.


> But it's the potential combination of clout and curation

Our plan for the Matrix.org server is, in an ideal world, to turn it off once we have portable accounts (and especially once we have P2P). Users can easily pick a set of other servers for home, or just use their devices. We have absolutely zero desire for any server to have clout or to end up as a Gmail-style centralisation point.

However, in an account-portable world, I suspect all we'll see is that communities (rather than servers) will emerge which carry an equivalent risk of disproportionate social influence. All we can do then is arm users with tools which allow them to visualise and curate that influence and make up their own minds, rather than accidentally getting trapped in someone else's filter bubble all over again.

> The only solution is to replace the game with something entirely different, the same way for instance, that cryptocurrency did with financial markets. If they had tried to play by the rules of the existing game, it'd have gotten nowhere, because the financial game is designed to specifically stop things like that.

From my perspective, introducing a morally relative reputation system as a core primitive in the protocol is very much replacing the game with something entirely different. Imagine if SMTP had had the concept of subjectively modelling spam built in from day 1. Or if the Web had had the concept of subjective search result quality.

Nobody has pulled this off before (as far as I know?) but we're having a go at it to see what happens. If it goes horribly wrong then worst case we just turn it off as a failed experiment.


Really hope it works out! Good luck to you and your team.



