|
| pphysch wrote:
| Musk is betting on the $8 membership being a big hit, which
| immediately addresses a lot of the moderation issues.
|
| It's gonna be a completely different paradigm than reddit.
| Herding cats into a box painted onto the ground vs. herding cats
| into an 8' high cage.
| eachro wrote:
| Reddit had the benefit of subreddit moderators policing their
| own. Twitter has no such thing. Maybe if you squint enough, big
| accounts blocking/muting bad actors in their replies can sort of
| count as self-policing but that does not prevent the bad actor
| from being a troll in someone else's replies.
| danuker wrote:
| > Spam is typically easily identified due to the repetitious
| nature of the posting frequency, and simplistic nature of the
| content (low symbol pattern complexity).
|
| Now that we have cheap language models, you could create endless
| variations of the same idea. It's an arms race.
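|
| To make this concrete, here is a toy sketch (my own
| illustration, not anything any platform actually runs) of why
| the classic "low symbol pattern complexity" signal breaks down
| against paraphrase:
|
|     import re
|
|     def shingles(text: str, k: int = 3) -> set:
|         """Character k-grams of a normalized string."""
|         t = re.sub(r"\s+", " ", text.lower()).strip()
|         return {t[i:i + k] for i in range(max(len(t) - k + 1, 1))}
|
|     def similarity(a: str, b: str) -> float:
|         """Jaccard overlap of shingles: near 1.0 for copy-paste
|         spam, often much lower for an LLM paraphrase of the
|         same idea."""
|         sa, sb = shingles(a), shingles(b)
|         return len(sa & sb) / max(len(sa | sb), 1)
|
| Copy-paste spam scores near 1.0 against its previous copies; a
| reworded variant of the same pitch can score far lower, so the
| repetition signal the detector relies on disappears.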
| dbrueck wrote:
| At least one missing element is that of _reputation_. I don't
| think it should work exactly like it does in the real world, but
| the absence of it seems to always lead to major problems.
|
| The cost of being a jerk online is too low - it's almost entirely
| free of any consequences.
|
| Put another way, not everyone deserves a megaphone. Not everyone
| deserves to chime in on any conversation they want. The promise
| of online discussion is that everyone should have the _potential_
| to rise to that, but just granting them that privilege from the
| outset and hardly ever revoking it doesn't work.
|
| Rather than having an overt moderation system, I'd much rather
| see where the reach/visibility/weight of your messages is driven
| by things like your time in the given community, your track
| record of insightful, levelheaded conversation, etc.
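|
| A minimal sketch of how such a weighting could work (every
| factor, name, and constant below is my own invention, purely
| illustrative):
|
|     import math
|
|     def visibility_weight(days_in_community: float,
|                           insightful_votes: int,
|                           flags_upheld: int) -> float:
|         """Hypothetical reach multiplier for a member's posts."""
|         # Tenure saturates (log) so old-timers don't dominate
|         # forever; roughly 1.0 after a year in the community.
|         tenure = math.log1p(days_in_community) / math.log1p(365)
|         # Track record: share of positive signals, smoothed so a
|         # handful of votes doesn't swing the score wildly.
|         record = insightful_votes / (insightful_votes + flags_upheld + 10)
|         return max(0.0, min(2.0, tenure * (0.5 + 1.5 * record)))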
| 22c wrote:
| > The cost of being a jerk online is too low - it's almost
| entirely free of any consequences.
|
| Couldn't agree more here.
|
| Going back to the "US Postal Service allows spam" comment made
| by Yishan: well yes, the US Postal Service will deliver mail
| that someone has PAID to have delivered; they've also paid to
| have it printed. The cost here is not zero, and most
| businesses would not send physical spam if there weren't at
| least some return on investment.
|
| One big problem not even touched by Yishan is vote
| manipulation, or to put it in your terms, artificially boosted
| reputation. I consider those to be problems with the platform.
| Unfortunately, I haven't yet seen a platform that can solve the
| problem of "you, as an individual, have ONE voice". It's too
| easy for users to make multiple accounts, get banned, create a
| new one, etc.
|
| At the same time, nobody who's creating a platform for users
| will want to make it HARDER for users to sign up. Recently
| Blizzard tried to address this (in spirit) by forcing users to
| use a phone number and not allowing "burner numbers" (foolishly
| determined by "if your phone number is pre-paid"). It
| completely backfired for being too exclusionary. I personally
| hate the idea of Blizzard knowing and storing my phone number.
| However, the idea that it should be more and more difficult or
| costly for toxic users to participate in the platform after
| they've been banned is not, on its own, a silly idea.
| ledauphin wrote:
| I agree with the basic idea that we want reputation, but the
| classic concept of reputation as a single number in the range
| (-inf, inf) is useless for solving real-world problems the way
| we solve them in the real world.
|
| Why? Because my reputation in meatspace is precisely 0 with
| 99.9% of the world's population. They haven't heard of me, and
| they haven't heard of anyone who has heard of me. Meanwhile, my
| reputation with my selected set of friends and relatives is
| fairly high, and undoubtedly my reputation with some small set
| of people who are my enemies is fairly low. And this is all
| good, because no human being can operate in a world where
| everyone has an opinion about them all the time.
|
| Global reputation is bad, and giving anyone a megaphone so they
| can chime into any conversation they want is bad, full stop.
| Megaphone-usage should not be a democratic thing where a simple
| majority either affirms or denies your ability to suddenly make
| everyone else listen to you. People have always been able to
| speak to their tribes/affinity groups/whatever you want to call
| them without speaking to the entire state/country/world, and if
| we want to build systems that will be resilient then we need to
| mimic that instead of pretending that reputation is a zero-sum
| global game.
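|
| One way to picture the data model being argued for here (a
| hypothetical sketch, not any existing system): reputation keyed
| by (context, user) rather than one global number per user.
|
|     reputation: dict[tuple[str, str], float] = {
|         ("close_friends", "alice"): 0.9,  # high within her circle
|         ("chess_forum", "alice"): 0.4,    # modest where she lurks
|     }
|
|     # With 99.9% of contexts absent, the default is simply
|     # "unknown", mirroring meatspace.
|     print(reputation.get(("world_at_large", "alice"), 0.0))  # 0.0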
| jacobr1 wrote:
| Social reputation IRL also has transitive properties -
| vouching from other high-rep people, or group affiliations.
| Primitive forms of social-graph connectedness have been
| exposed in social networks but it doesn't seem like they've
| seen much investment in the past decade.
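|
| A sketch of one propagation step over such a vouch graph
| (PageRank-flavored and hypothetical; the damping constant is
| arbitrary):
|
|     def propagate_reputation(rep: dict[str, float],
|                              vouches: dict[str, list[str]],
|                              damping: float = 0.5) -> dict[str, float]:
|         """Part of your reputation comes from whoever vouches
|         for you; the rest is whatever you had already earned."""
|         new_rep = {}
|         for user, score in rep.items():
|             vouchers = vouches.get(user, [])
|             transferred = sum(rep.get(v, 0.0) for v in vouchers)
|             avg = transferred / max(len(vouchers), 1)
|             new_rep[user] = (1 - damping) * score + damping * avg
|         return new_rep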
| paul7986 wrote:
| The Internet needs a verified public identity /
| reputation system, especially with deep fakes becoming more
| pervasive and easier to create. Trolls can troll all they want,
| but if they want to be taken seriously, they should back up
| their words with a verified public Internet / reputation ID.
|
| If this is one of Musk's goals with Twitter he didn't overpay.
| The Internet definitely needs such a system... has for a while
| now!
|
| He might connect Twitter into the crypto ecosystem, and that,
| along with a verified public Internet / reputation ID system, I
| think could be powerful.
| pixl97 wrote:
| How would such a system work worldwide across multiple
| governments, resist identity theft, and prevent things like
| dictatorships from knowing exactly who you are?
| jonny_eh wrote:
| Remember keybase.io? They still exist, but not as a cross-
| platform identity system anymore.
| runako wrote:
| It's worth noting that Twitter gets a lot of flak for
| permanently banning people, but that those people were all
| there under their real names. Regardless of your opinion on
| the bans, verifying that they were indeed banning e.g. Steve
| Bannon would not have made the decision-making process around
| his ban any easier.
| runako wrote:
| This is a good idea, except that it assumes _reputation_ has
| some directional value upon which everyone agrees.
|
| For example, suppose a very famous TV star joins Twitter and
| amasses a huge following due to his real-world popularity
| independent of Twitter. (Whoever you have in mind at this
| point, you are likely wrong.) His differentiator is he's a
| total jerk all the time, in person, on TV, etc. He is popular
| because he treats everyone around him like garbage. People love
| to watch him do it, love the thrill of watching accomplished
| people debase themselves in attempts to stay in his good
| graces. He has a reputation for being a popular jerk, but
| people obviously like to hear what he has to say.
|
| Everyone would expect his followers to see his posts, and in
| fact it is reasonable to expect those posts to be more
| prominent than those of less famous people. Now imagine that
| famous TV star stays in character on the platform and so is
| also a total jerk there: spewing hate, abuse, etc.
|
| Do you censor this person or not? Remember that you make more
| money when you can keep famous people on the site creating more
| engagement.
|
| The things that make for a good online community are not
| necessarily congruent with those that drive reputation in real
| life. Twitter is in the unfortunate position of bridging the
| two.
| dbrueck wrote:
| I posted some additional ideas in a reply to another comment
| that I think address some of your points, but you bring up
| another thing that is broken in both offline and online
| communities: reputation is transferable across communities far
| more than it should be.
|
| You see this anytime e.g. a high profile athlete "weighs in"
| on complicated geopolitical matters, when in reality their
| opinion on that matter should count for next to nothing in most
| cases, unless in addition to being a great athlete they have
| also established a track record (reputation) of being expert
| or insightful in international affairs.
|
| A free-for-all community like Twitter could continue to
| exist, where there are basically no waiting periods before
| posting and your reputation from other areas counts a lot.
| But then other communities could set their own standards that
| say you can't post for N days and that your incoming
| reputation factor is 0.001 or something like that.
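|
| A sketch of that policy knob (hypothetical; the names and the
| 0.001 come straight from the example above):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class CommunityPolicy:
|         min_days_before_posting: int  # the "N days" waiting period
|         incoming_rep_factor: float    # e.g. 0.001 for a strict community
|
|     def effective_reputation(external_rep: float, local_rep: float,
|                              days_present: int,
|                              policy: CommunityPolicy) -> float:
|         """Outside reputation barely counts here; local
|         reputation has to be earned in place."""
|         if days_present < policy.min_days_before_posting:
|             return 0.0  # may read, may not post yet
|         return local_rep + policy.incoming_rep_factor * external_rep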
|
| So the person could stay in character but they couldn't post
| for a while, and even when they did, their posts would
| initially have very low visibility because their reputation
| in this new community would be abysmally low. Only by really
| engaging in the community over time would their reputation
| rise to the point of their posts having much visibility, and
| even if they were playing the long game and faking being good
| for a long time and then decided to go rogue, their
| reputation would drop quickly so that the damage they could
| do would be pretty limited in that one community, while also
| potentially harming their overall reputation in other
| communities too.
|
| As noted in the other post, there is lots of vagueness here
| because it's just thinking out loud, but I believe the
| concepts are worth exploring.
| runako wrote:
| These are good ideas that might help manage an online
| community! On the other hand, they would be bad for
| business! When a high-profile athlete weighs in on a
| complicated geopolitical matter and then (say) gets the
| continent wrong, that will generate tons of engagement
| (money) for the platform. Plus there's no harm done. A
| platform probably wants that kind of content.
|
| And the whole reason the platform wants the athlete to post
| in the first place is because the platform wants that
| person's real-world reputation to transfer over. I believe
| it is a property of people that they are prone to more
| heavily weigh an opinion from a well-known/well-liked/rich
| person, even if there is no real reason for that person to
| have a smart opinion on a given topic. This likely is not
| something that can be "fixed" by online community
| governance.
| sydd wrote:
| But then how would you address "reputable" people spreading
| idiotic things or fake news? How would you prevent Joe Rogan
| spreading COVID conspiracy theories? Or Kanye's antisemitic
| comments? Or a celebrity hyping up some NFT for a quick cash
| grab? Or Elon Musk falling for some fake news and spreading it?
| brookst wrote:
| Maybe? Reputation systems can devolve into rewarding
| groupthink. It's a classic "you get what you measure"
| conundrum, where once it becomes clear that an opinion / phrase
| / meme is popular, it's easy to farm reputation by repeating
| it.
|
| I like your comment about "track record of insightful,
| levelheaded conversation", but that introduces another
| abstraction. Who measures insight or levelheadedness, and how
| do you avoid that being gamed?
|
| In general I agree that reputation is an interesting and
| potentially important signal; I'm just not sure I've ever seen
| an implementation that doesn't cause a lot of the problems it's
| trying to solve. Any good examples?
| dbrueck wrote:
| Yeah, definitely potential for problems and downsides. And I
| don't know of any implementations that have gotten it right.
| And to some degree, I imagine all such systems (online or
| not) can be gamed, so it's also important for the designers
| of such a system to not try to solve every problem either.
|
| And maybe you do have some form of moderation, but not in the
| sense of moderation of your agreement/disagreement with ideas
| but moderation of behavior - like a debate moderator - based
| on the rules of the community. Your participation in a
| community would involve reading, posting as desired once
| you've been in a community for a certain amount of time,
| taking a turn at evaluating N comments that have been
| flagged, and taking a turn at evaluating disputes about
| evaluations, with the latter 2 being spread around so as to
| not take up a lot of time (though, having those duties could
| also reinforce your investment in a community). The
| reach/visibility of your posts would be driven off your
| reputation in that community, though readers could also
| control how much they see - maybe I only care about hearing
| from more established leaders while you are more open to
| hearing from newer / lower-reputation voices. An
| endorsement from someone with a higher reputation counts more
| than an endorsement from someone who just recently joined,
| though not so huge of a difference that it's impossible for
| new ideas to break through.
|
| As far as who measures, it's your peers - the other members
| of the community, although there needs to be a ripple effect
| of some sort - if you endorse bad behavior, then that
| negatively affects your reputation. If someone does a good
| job of articulating a point, but you ding them simply because
| you disagree with that point, then someone else can ding you.
| If you consistently participate in the community duties well,
| it helps your reputation.
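|
| A sketch of how the endorsement weighting and that ripple
| effect could combine (all names and numbers are hypothetical):
|
|     def apply_endorsement(rep: dict[str, float], endorser: str,
|                           author: str, quality: float) -> None:
|         """quality > 0: post later judged good; quality < 0: bad.
|         Higher-rep endorsements carry more weight, but the spread
|         is capped so new voices can still break through."""
|         weight = min(1.0 + rep.get(endorser, 0.0), 5.0)
|         rep[author] = rep.get(author, 0.0) + weight * quality
|         if quality < 0:
|             # Ripple: endorsing bad behavior costs the endorser too.
|             rep[endorser] = rep.get(endorser, 0.0) + 0.5 * quality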
|
| The above is of course super hand-wavy and incomplete, but
| something along those lines has IMO a good shot of at least
| being a better alternative to some of what we have today and,
| who knows, could be quite good.
| brookst wrote:
| > Your participation in a community would involve reading,
| posting as desired once you've been in a community for a
| certain amount of time, taking a turn at evaluating N
| comments that have been flagged, and taking a turn at
| evaluating disputes about evaluations, with the latter 2
| being spread around so as to not take up a lot of time
| (though, having those duties could also reinforce your
| investment in a community).
|
| This is an interesting idea, and I'm not sure it even needs
| to be that rigorous. Active evaluations are almost a chore
| that will invite self-selection bias. Maybe we use
| sentiment analysis/etc to passively evaluate how people
| present and react to posts?
|
| It'll be imperfect in any small sample, but across a larger
| body of content, it should be possible to derive metrics
| like "how often does this person compliment a comment that
| they also disagree with" or "relative to other people, how
| often do this person's posts generate angry replies", or
| even "how often does this person end up going back and
| forth with one other person in an increasingly
| angry/insulting style"?
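|
| For instance, assuming each post has already been annotated by
| some sentiment/stance model, one such metric might look like
| this (the field names are hypothetical stand-ins, not a real
| API):
|
|     def civility_score(posts: list[dict]) -> float:
|         """Fraction of a user's replies that stay respectful even
|         when they disagree with the parent comment."""
|         disagreements = [p for p in posts
|                          if p["stance_vs_parent"] == "disagree"]
|         if not disagreements:
|             return 0.5  # no signal yet; neutral prior
|         civil = sum(1 for p in disagreements
|                     if p["sentiment"] != "hostile")
|         return civil / len(disagreements)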
|
| It still feels game-able, but maybe that's not bad? Like, I
| am going to get such a great bogus reputation by writing
| respectful, substantive replies and disregarding bait like
| ad hominems! That kind of gaming is maybe a good thing.
|
| One fun thing is this could be implemented over the top of
| existing communities like Reddit. Train the models,
| maintain a reputation score externally, offer an API to
| retrieve, let clients/extensions decide if/how to re-order
| or filter content.
| mjjjjjjj wrote:
| This is purely hypothetical, but I bet Reddit could derive an
| internal reputational number that is a combination of both
| karma (free and potentially farmable) and awards (which people
| actually pay for, or which are scarce and show what they
| value) that would have a better signal-to-noise ratio than just
| karma alone.
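|
| A toy version of that blend (the weights are invented; only the
| shape matters):
|
|     import math
|
|     def composite_reputation(karma: int, paid_awards: int) -> float:
|         """Karma is cheap and farmable, so it only counts
|         logarithmically; awards cost real money or are scarce,
|         so each one counts much more."""
|         return math.log1p(max(karma, 0)) + 5.0 * paid_awards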
| pixl97 wrote:
| So a wealthy bot farmer rules this system?
| fblp wrote:
| Google search is an example of site reputation (search
| ranking) driven by backlinks and various other site
| quality metrics.
|
| I would also say the Facebook feed ranks based on the
| reputation and relevance of the poster of the content.
| pixl97 wrote:
| Is Google supposed to be a positive or a negative example
| here, given all the recent complaints about SEO spam and
| declining search quality?
| jonny_eh wrote:
| Soon reputation will cost only $8 a month.
| VonGallifrey wrote:
| I don't know why this meme is being repeated so much. I see
| it everywhere.
|
| Twitter Verification is only verifying that the account is
| "authentic, notable, and active".
|
| Nothing about the Verification Process changed. At least I
| have not heard about any changes other than the price change
| from free to $8.
| dragonwriter wrote:
| > Twitter Verification is only verifying that the account
| is "authentic, notable, and active".
|
| Musk has been very clear that it will be open to anyone who
| pays the (increased) cost for Twitter's Blue (which also
| will get other new features), and thus no longer be tied to
| "notable" or "active".
|
| > At least I have not heard about any changes other than
| the price change from free to $8.
|
| It's not a price change from free to $8 for Twitter
| Verification. It is a _discontinuation_ of Twitter
| Verification as a separate thing, with the (revised)
| process and resulting checkmark moving to be an
| open-to-anyone-who-pays component of Blue, which
| increases in cost to $8/mo (currently $4.99/mo).
| billiam wrote:
| The best part of his engrossing Twitter thread is that he inserts
| a multitweet interstitial "ad" for his passion project promoting
| reforestation right in the middle of his spiel.
| baby wrote:
| that's the best approach to growth:
|
| 1. find what's trendy
|
| 2. talk about what's trendy
|
| 3. in the middle or at the end of that, talk about how that
| relates to you and your work
| anigbrowl wrote:
| I'm sure it works across the population at large but I avoid
| doing business with people who engage in that kind of
| manipulation. They're fundamentally untrustworthy in my
| experience.
| CamperBob2 wrote:
| It's the best approach for flipping your bozo bit in the
| minds of most of your readers, but I don't see how that leads
| to "growth."
| EarlKing wrote:
| Yes... the irony of discussing signal-to-noise ratio issues
| (i.e. spam) and then spamming up your own thread. This post
| sponsored by Irony.
| SilasX wrote:
| Maybe it's some kind of reinforcement of his point about
| policing the line between spam and non-spam?
| dang wrote:
| I know, but:
|
| " _Please don 't pick the most provocative thing in an article
| or post to complain about in the thread. Find something
| interesting to respond to instead._"
|
| https://news.ycombinator.com/newsguidelines.html
| choppaface wrote:
| I find it interesting that dang's post about not just HN
| rules but also his personal feelings about yishan's thread: *
| appear in the same post--- there clearly is no personal
| boundary with dang * direct replies to dang's post, now the
| top of the comments section, are disabled
|
| whenever dang tries to "correct the record" or otherwise
| engage in frivolous squabbles with other HN commenters, I
| really hope this article pops up in the linked material. some
| may argue that yishan here is doing inapropriate self-
| promotion and that might undermine trust in his message. i
| hope HN readers notice how partial dang is, how he's used HN
| technical features ti give his own personal feelings
| privilege, and the financial conflicts of interest here.
| onetimeusename wrote:
| Free speech might be self-regulating. A place that gets excessive
| spam attracts no one, and then there wouldn't be much motivation
| to spam it anymore.
|
| I don't recall spam restrictions on old IRC; a moderator could
| simply boot you off. My own theory is that an exponential
| cool-off timer on posts could be the only thing needed while
| still being technically 100% free speech.
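|
| A sketch of that cool-off idea (the constants are arbitrary):
|
|     def cooloff_seconds(recent_posts: int,
|                         base: float = 30.0,
|                         factor: float = 2.0,
|                         cap: float = 86_400.0) -> float:
|         """Exponential back-off: the more you've posted recently,
|         the longer you wait before posting again. Nothing is
|         deleted; speech is throttled in rate, not content."""
|         return min(cap, base * factor ** recent_posts)
|
| With these numbers a user's first post in a window waits 30
| seconds, the fifth waits 8 minutes, and the wait is capped at
| one day.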
| bombcar wrote:
| IRC had tons of independent little servers doing their own
| thing.
|
| We have huge companies spanning the entire globe; if you get
| banned from one you're out world-wide.
|
| This is where federation can help, IF it truly is a bunch of
| smaller servers rather than ending up one large one.
| aksjdhmkjasdof wrote:
| I have actually worked in this area. I like a lot of Yishan's
| other writing but I find this thread mostly a jumbled mess
| without much insight. Here are a couple assorted points:
|
| >In fact, once again, I challenge you to think about it this way:
| could you make your content moderation decisions even if you
| didn't understand the language they were being spoken in?
|
| I'm not sure what the big point is here but there are a couple
| parts to how this works in the real world:
|
| 1) Some types of content removal do not need you to understand
| the language: visual content (images/videos), legal takedowns
| (DMCA).
|
| 2) Big social platforms contract with people around the world in
| order to get coverage of various popular languages.
|
| 3) You can use Google Translate (or other machine translation) to
| review content in some languages that nobody working in content
| moderation understands.
|
| But some content that violates the site's policies can easily
| slip through the cracks if it's in the right less-spoken
| language. That's just a cost of doing business. The fact that the
| language is less popular will limit the potential harm but it's
| certainly not perfect.
|
| >Here's the answer everyone knows: there IS no principled reason
| for banning spam. We ban spam for purely outcome-based reasons:
|
| >It affects the quality of experience for users we care about,
| and users having a good time on the platform makes it successful.
|
| Well, that's the same principle that underlies all content
| moderation: "allowing this content is more harmful to the
| platform than banning it". You can go into all the different
| reasons why it might be harmful but that's the basic idea and
| it's not unprincipled at all. And not all spam is banned from all
| platforms--it could just have its distribution killed or even be
| left totally alone, depending on the specific cost/benefit
| analysis at play.
|
| You can apply the same reasoning to every other moderation
| decision or policy.
|
| The main thrust of the thread seems to be that content moderation
| is broadly intended to ban negative behavior (abusive language
| and so on) rather than to censor particular political topics. To
| that I say, yeah, of course.
|
| FWIW I do think that the big platforms have taken a totally wrong
| turn in the last few years by expanding into trying to fight
| "disinformation" and that's led to some specific policies that
| are easily seen as political (eg policies about election fraud
| claims or covid denialism). If we're just talking about staying
| out of this business, then sure, give it a go. High-level
| blabbering about "muh censorship!!!" without discussion of
| specific policies is what you get from people like Musk or
| Sacks, though, and that's best met with an eye roll.
| Waterluvian wrote:
| If I wanted quality content, I would just do the Something Awful
| approach and charge $x per account.
|
| If I wanted lots of eyeballs (whether real or fake) to sell ads,
| I would just pay lip service to moderation issues, while focusing
| on only moderating anything that affects my ability to attract
| advertisers.
|
| But what I want, above all, because I think it would be hilarious
| to watch, is for Elon to activate Robot9000 on all of Twitter.
| onetimeusename wrote:
| Robot9000 really didn't improve quality in the places it was
| deployed, though, and people just gamed it.
|
| edit: that said, I think Something Awful arguably has the best
| approach to this, does it not? The site is over 20 years old at
| this point. That is absolutely ancient compared to all the
| public message forums that exist.
| Waterluvian wrote:
| I agree. I think the SA approach is the best I've ever seen.
| But as I'm flippantly pointing out: it only works if you
| really only care about fostering quality social interaction.
|
| The mistake SA is making is not fixating on revenue as the
| chief KPI. ;)
| idiotsecant wrote:
| SA is not the font of internet culture that it once was,
| either, so clearly the price of admission is not sufficient
| to make it successful. It seems to me it was, at most, a
| partial contributor.
| onetimeusename wrote:
| I think it's an interesting argument about what SA is now.
| I hear membership is growing again. It has a dedicated
| group there. I think that's what's most interesting. That's
| really not much different than a Reddit board in theory.
| But Reddit boards seem to come and go constantly and suffer
| from all sorts of problems. I am not a redditor but SA
| seems like a better board than a specific subreddit.
|
| My point is that maybe what SA is now is the best you can
| hope for on the internet, and it's going strong(?).
| Apocryphon wrote:
| Also, SA has "underperformed" as an internet platform -
| Lowtax notoriously failed to capitalize on the community
| and grow it into something bigger (and more lucrative). So
| it remains a large old-school vBulletin-style internet
| forum instead of a potential Reddit or even greater, albeit
| with its culture and soul intact.
| Waterluvian wrote:
| Not suggesting you meant it this way, but there's an
| amusing "money person with blinders on" angle to the
| statement. It's the "what's the point of anything if
| you're not making money?!"
| anonymid wrote:
| Isn't it inconsistent to say both "moderation decisions are about
| behavior, not content" and "platforms can't justify moderation
| decisions because of privacy reasons"?
|
| It seems like you wouldn't need to reveal any details about the
| content of the behavior, but just say "look, this person posted X
| times, or was reported Y times", etc... I find the author to be
| really hand-wavy around why this part is difficult.
|
| I work with confidential data, and we track personal information
| through our system and scrub it at the boundaries (say, when
| porting it from our primary DB to our systems for monitoring or
| analysis). I know many other industries (healthcare, education,
| government, payments) face very similar issues...
|
| So why don't any social network companies already do this?
| ranger207 wrote:
| For one, giving specific examples gives censured users an
| excuse to do point-by-point rebuttals. In my experience, point-
| by-point rebuttals are one of the behaviors that should be
| considered bad behavior and moderated against because they
| encourage the participant to think only of each point
| individually and ignore the superlinear effect of every point
| taken together. For another, the user can latch on specific
| examples that seem innocuous out of context and allow them to
| complain that their censorship was obviously heavy handed, and
| if the user is remotely well known then it's famous person's
| word versus random other commenters trying to add context. The
| ultimate result is that people see supposed problems with
| moderation far more than anyone ever says "man I sure am glad
| that user's gone" so there's a general air of resentment
| against the moderation and belief in its ineffectiveness
| mikkergp wrote:
| I would guess part of the problem is that the more specific the
| social media company gets, the more nitpicky the users get.
| ptero wrote:
| This topic was adjacent to the sugar and L-isomer comments,
| which probably influenced my viewpoint:
|
| Yishan is saying that Twitter (and other social networks)
| moderate bad behavior, not bad content. They just strive for
| higher SNR. It is just that specific types of content seem to be
| disproportionately responsible for starting bad behavior in
| discussions, and thus get banned. That sounds rational and,
| while potentially slightly unfair, looks totally reasonable for
| a private company.
|
| But what I think is happening is that this specific moderation on
| social networks in general and Twitter in particular has pushed
| them along the R- (or L-) isomer path to an extent that a lot of
| content, however well presented and rationally argued, just
| cannot be digested. Not because it is objectively worse or leads
| into a nastier state, but simply because deep inside some
| structure is pointing in the wrong direction.
|
| Which, to me, is very bad. Once you reach this state of mental R-
| and L- incompatibility, no middle ground is possible and the
| outcome is decided by an outright war, which is not fun and
| brings a lot of casualties. My 2c.
| wwweston wrote:
| > a lot of content, however well presented and rationally
| argued, just cannot be digested.
|
| Can you name the topic areas where even cautious presentation
| will not be sustained on twitter?
| vanjajaja1 wrote:
| This was my takeaway as well. Yishan is arguing that social
| media companies aren't picking sides, they're just aiming for a
| happy community, but the end result of that is that the loudest
| and angriest group(s) end up emotionally bullying the
| moderation algorithm into conforming. This is precisely the
| problem that Elon seems to be tackling.
| RickJWagner wrote:
| Reddit is a sewer. I don't think the Ex-CEO has demonstrated any
| moderation skills.
| LegitShady wrote:
| He was CEO of a company that has volunteer moderators; what he
| knows about handling moderation is tainted by the way reddit is
| structured. Also, reddit's moderation is either heavy-handed or
| totally ineffective depending on the case, so I'm not sure he's
| the right person to talk to.
|
| Also, I stopped reading when he did an ad break on a twitter
| thread. Who needs ads in twitter threads? It makes him seem
| desperate and out of touch. Nobody needs his opinion, and they
| need his opinion with ad breaks even less.
| protoman3000 wrote:
| I like the idea that you want to moderate not content but
| behavior, and it led me to these thoughts. I'm curious about
| your additions to them.
|
| Supply moderation of psychoactive agents never worked. People
| have a demand to alter the state of their consciousness, and we
| should try to moderate demand in effective ways. The problem is
| not the use of psychoactive agents, it is the abuse. And the same
| applies to social media interaction which is a very strong
| psychoactive agent [1]. Nevertheless it can be useful. Therefore
| we want to fight abuse, not use.
|
| I would like to put up for discussion the usage and extension of
| techniques for demand moderation in the context of social media
| interactions which we know to somewhat work already in other
| psychoactive agents. Think something like drugs education in
| schools, fasting rituals, warning labels on cigarettes, limited
| selling hours for alcohol, trading food stamps for drug addicts
| etc.
|
| For example, assuming the platform could somehow identify abusive
| patterns in the user, it could
|
| - show warning labels that their behavior might be abusive in
| the sense of social media interaction abuse
|
| - give them mandatory cool-down periods
|
| - trick the allostasis principle of their dopamine reward system
| by doing things intermittently, e.g. by only randomly letting
| their posts go through to other users, only randomly allowing
| them to continue reading the conversation (maybe only for some
| time), or only randomly shadow-banning some posts (see the
| sketch after this list)
|
| - make them read documents about harmful social media interaction
| abuse
|
| - hint to them what abusive patterns in other people look like
|
| - give limited reading or posting credits (e.g. "Should I
| continue posting in this flamewar thread and then not post
| somewhere else where I find it more meaningful at another time?")
|
| - etc.
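|
| A sketch of the random-gating item above (purely illustrative;
| how the platform decides a user is "abusive" is the hard part
| and is assumed to happen elsewhere):
|
|     import random
|
|     def should_deliver(post_id: str, user_seed: int,
|                        pass_rate: float = 0.7) -> bool:
|         """Only a random (but stable per post) fraction of a
|         flagged user's posts go through, breaking the reliable
|         action-reward loop the dopamine system expects."""
|         rng = random.Random(hash((post_id, user_seed)))
|         return rng.random() < pass_rate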
|
| I would like to hear your opinions about this in a sensible
| discussion.
|
| _________
|
| [1] Yes, social media interaction is a psychoactive and addictive
| agent, just like any other drug or your common addiction like
| overworking yourself, but I digress. People use social media
| interactions to, among other things, raise their anger, to feed
| their addiction to complaining, to feel a high of "being
| right"/owning it up to the libs/nazis/bigots/idiots etc., to feel
| like they learned something useful, to entertain themselves, to
| escape from reality etc. Many people suffer from compulsive or at
| least habitual abuse of social media interactions, which has been
| shown by numerous studies (sorry, too lazy to find a paper to
| cite now).
| Moreover the societal effects of abuse of social media
| interactions and their dynamics and influence on democratic
| politics are obviously detrimental.
| idiotsecant wrote:
| Maybe this works on a long enough timeline, but by your analogy
| entire generations of our population are now hopelessly
| addicted to this particular psychoactive agent. We might be
| able to make a new generation that is immune to it, but in the
| mean time these people are strongly influencing policy,
| culture, and society in ways that are directly based on that
| addiction. This is a 'planting trees I know I will never feel
| the shade of' situation.
| carapace wrote:
| > working on climate: removing CO2 from the atmosphere is
| critical to overcoming the climate crisis, and the restoration of
| forests is one of the BEST ways to do that.
|
| As a tangent, Akira Miyawaki has developed a method for
| 'reconstitution of "indigenous forests by indigenous trees"'
| which "produces rich, dense and efficient protective pioneer
| forests in 20 to 30 years".
|
| https://en.wikipedia.org/wiki/Akira_Miyawaki#Method_and_cond...
|
| It's worth quoting in full:
|
| > Rigorous initial site survey and research of potential natural
| vegetation
|
| > Identification and collection of a large number of various
| native seeds, locally or nearby and in a comparable geo-climatic
| context
|
| > Germination in a nursery (which requires additional maintenance
| for some species; for example, those that germinate only after
| passing through the digestive tract of a certain animal, need a
| particular symbiotic fungus, or a cold-induced dormancy phase)
|
| > Preparation of the substrate if it is very degraded, such as
| the addition of organic matter or mulch, and, in areas with heavy
| or torrential rainfall, planting mounds for taproot species that
| require a well-drained soil surface. Hill slopes can be planted
| with more ubiquitous surface roots species, such as cedar,
| Japanese cypress, and pine.
|
| > Plantations respecting biodiversity inspired by the model of
| the natural forest. A dense plantation of very young seedlings
| (but with an already mature root system: with symbiotic bacteria
| and fungi present) is recommended. Density aims at stirring
| competition between species and the onset of phytosociological
| relations close to what would happen in nature (three to five
| plants per square metre in the temperate zone, up to five or ten
| seedlings per square metre in Borneo).
|
| > Plantations randomly distributed in space in the way plants are
| distributed in a clearing or at the edge of the natural forest,
| not in rows or staggered.
| atchoo wrote:
| I think you have to be quite credulous to engage in this topic of
| "twitter moderation" as if it's in good faith. It's not about
| about creating a good experience for users, constructive debate
| or even money. It's ALL about political influence.
|
| > I'm heartened to know that @DavidSacks is involved.
|
| I'm not. I doubt he is there because Twitter is like Zenefits;
| it's because his preoccupation over the last few years has been
| politics as part of the "New Right" (Thiel, Masters, Vance,
| etc.), running fundraisers for DeSantis and endorsing Musk's
| pro-Russian nonsense on Ukraine.
|
| https://newrepublic.com/article/168125/david-sacks-elon-musk...
| drewbeck wrote:
| Very helpful and unfortunate context, thanks
| mikkergp wrote:
| Yeah, I saw David Sacks tweet:
|
| " The entitled elite is not mad that they have to pay $8/month.
| They're mad that anyone can pay $8/month."
|
| There must be quite a few people in here who are well versed in
| customer relations, at least in the context of a startup; can
| anyone explain to me why Musk and Sacks seem to have developed
| the strategy of insulting their customers and potential
| customers?
|
| I can think of two reasons
|
| 1. They think twitter has a big enough moat of obsessed people
| that they can get away with whatever they want.
|
| 2. They think that there really is a massive group of angry
| "normies" they can rile up to pay $8 a month for twitter blue,
| but isn't the goal of twitter blue, ironically, to get priority
| access to the "anointed elite"? For sure I'm not paying $8 a
| month to get access to the feeds of my friends and business
| associates.
|
| David Sacks' tweet does feel very Trumpian in a way though,
| which supports the notion of bringing trump back and starting
| the free speech social network.
| atchoo wrote:
| I think their general plan will be to discourage/silence
| influential left-wing voices with enough cover to keep the
| majority of the audience for an emboldened right-wing.
|
| If thinking imaginatively, then the proposal framed as "$8/mo
| or have your tweets deranked" is a deal they actively don't
| want left-wingers to take. They want to be able to derank
| their tweets with a cover of legitimacy.
|
| The more they can turn this fee into a controversial "I
| support Musk" loyalty test, the more they can discourage
| left-wing / anti-Musk subscribers while encouraging right-
| wing / pro-Musk subscribers who will all have their tweets
| boosted.
|
| Feels conspiratorial but it's a fee that mostly upsets
| existing blue-tick celebrities, which _should_ be the last
| group Twitter The Business would want to annoy, but they are
| the influential left-wingers. If you look at who Musk picked
| fights with about it e.g. AOC and Stephen King, then that is
| even more suggestive of deliberate provocation.
|
| Whether planned or not, I suspect that this is how it plays
| out.
| MetaWhirledPeas wrote:
| > _can anyone explain to me why Musk and Sacks seem to have
| developed the strategy of insulting their customers and
| potential customers?_
|
| I find it fascinating that so many people are whip-crack
| quick to loudly criticize the $8 checkmark move.
|
| How many of these critics even use Twitter?
|
| And of those who do use Twitter, how can any of them know the
| outcome of such a move? Why not just wait and observe?
| mikkergp wrote:
| I think because Elon and Co. are acting so dismissive and
| entitled. They're acting frickin weird. Admittedly I think
| who you think sounds more entitled depends on your
| worldview. I do think the journalist reactions are strange,
| but probably just because they're reacting to something so
| strange.
|
| Elon is hardly describing a vision for this new version of
| twitter that people might be inspired to spend $8 for - yes,
| something vague about plebs vs nobility, and half as many
| ads - but his biggest call to action has been "Hey, we need
| the money". They're acting so shitty to everyone that it's
| hardly a surprise people aren't fawning back in confidence.
| Plus I can't help but feel that these people are really
| just echoing what everyone else is thinking. Why am I
| paying $8 a month for Twitter?
| dragonwriter wrote:
| > Elon is hardly describing a vision for this new version
| of twitter that people might be inspired to spend $8 for,
| yes something vague about plebs vs nobility,
|
| Yeah, Elon calls the status quo a "lords & peasants
| system" and says that to get out of that model Twitter
| should have a two-tier model where the paid users get
| special visual flair, algorithmic boosts in their tweets'
| prominence and reach, and a reduced-ads feed experience
| compared to free users.
|
| And somehow doesn't see the irony.
| ZeroGravitas wrote:
| Aside from the weird elite-baiting rhetoric, does this mean
| that the blue checkmark no longer means "yes, this is that
| famous person/thing you've heard of, not an impersonator" but
| now just means "this person gave us 8 dollars"?
| mikkergp wrote:
| I don't know if this has been made clear yet. And for a
| less-than-famous person/thing, do you have to submit ID / use
| your real name to get verified?
| lazyeye wrote:
| I think they are arguing it's about removing political influence
| and restoring balance.
|
| This will be seen as an "attack on the left" because Twitter
| has been controlled by the left till now.
| boredhedgehog wrote:
| You are showcasing exactly the behavior he blames for the
| degeneration of dialogue: taking any kind of question and
| turning it into a political us vs. them.
| atchoo wrote:
| When the context of the discussion is twitter moderation in
| the wake of Musk's takeover and who his team is, it's already
| political. For Yishan to pump up Sacks and his confidence in
| him to fix moderation, without acknowledging that today he is
| a political operator, is close to dishonest. Contributing
| this information is hopefully helpful.
| gort19 wrote:
| Isn't that what Sacks is doing when he talks about the
| 'entitled elite' being 'mad'?
| deckard1 wrote:
| I did not see any mention of structure.
|
| Reddit has a different structure than Twitter. In fact, go back
| to before Slashdot and Digg, and the common (HN, Reddit) format of
| drive-by commenting was simply not a thing. Usenet conversations
| would take place over the course of days, weeks, or even months.
|
| Business rules. Twitter is driven by engagement. Twitter is
| practically the birthplace of the "hot take". It's what drives a
| lot of users to the site and keeps them there. How do you control
| the temper of a site when your _goal_ is inflammatory to begin
| with?
|
| Trust and Good Faith. When you enter into a legal contract, both
| you and the party you are forming a contract with are expected to
| operate in _good faith_. You are signaling your intent is to be
| fair and honest and to uphold the terms of the contract. Now, the
| elephant in the room here is what happens when the CEO, Elon
| Musk, could arguably (Matt Levine has done so, wonderfully) not
| even demonstrate good faith during the purchase of Twitter
| itself. Or has been a known bully to Bill Gates regarding his
| weight or sex appeal, or simply enjoys trolling with conspiracy
| theories. What does a moderation system even mean in the context
| of a private corporation owned by such a person? Will moderation
| apply to Elon? If not, then how is trust established?
|
| There is a lot to talk about on that last point. In the late '90s
| a site called Advogato[1] was created to explore trust metrics.
| It was not terribly successful, but it was an interesting time in
| moderation. Slashdot was also doing what they could. But then it
| all stopped with the rise of corporate forums. Corporate forums,
| such as Reddit, Twitter, or Facebook, seem to have no interest in
| these sorts of things. Their interest is in conflict: they need
| to onboard as many eyeballs as possible, as quickly as possible,
| and with as little user friction as possible. They also serve
| advertisers, who, you could argue, are the _real_ arbiters of
| what can be said on a site.
|
| [1] https://en.wikipedia.org/wiki/Advogato
| jmyeet wrote:
| This is a good post.
|
| I'm one of those who likes to bring out the "fire in a theater"
| or doxxing as the counterexample showing that literally nobody is
| a free speech absolutist. This on top of it not being a 1A issue
| anyway because the first five words are "Congress shall make no
| law".
|
| But spam is a better way to approach this and show it really
| isn't a content problem but a user behaviour problem. Because
| that's really it.
|
| Another way to put this is that the _total experience_ matters,
| meaning the experience of all users: content creators, lurkers
| _and advertisers_. You could go into an AA meeting and not
| shut up about scientology or coal power and you'll get kicked
| out. Not because your free speech is being violated but because
| you're annoying and you're worsening the experience of everyone
| else you come in contact with.
|
| Let me put it another way: just because you have a "right" to say
| something doesn't mean other people should be forced to hear it.
| That platform has a greater responsibility that your personal
| interests and that's about behaviour (as the article notes), not
| content.
|
| As this thread notes, this is results-oriented.
| spaceman_2020 wrote:
| At how many tweets in the thread do you just go "maybe I should
| just write this as a blog post?"
| [deleted]
| whatshisface wrote:
| He says there is no principled reason to ban spam, but there's an
| obvious one: it isn't really speech. The same goes for someone
| who posts the same opinion everywhere with no sense of contextual
| relevance. That's not real speech, it's just posting.
| ch4s3 wrote:
| It's basically just public nuisance, like driving up and down a
| street blaring your favorite club banger at 3AM. More
| uncharitably it's a lot like littering, public urination, or
| graffiti.
| mr_toad wrote:
| Spammers are also using a public space for their own selfish
| gain, which makes them freeloaders.
| ch4s3 wrote:
| That's sort of what I mean. It's like putting up a
| billboard on someone else's property. Taking down the
| billboard isn't about the content of the billboard but
| rather the non-permitted use of the space.
| puffoflogic wrote:
| TL;DR: Run your platform to conform to the desires of the loudest
| users. Declare anything your loudest users don't want to see to
| be "flamewar" content and remove it.
|
| My take: "Flamebait" _is_ a completely accurate label for the
| content your loudest users don't want to see, but it's by
| definition your loudest users who are actually doing the flaming,
| and by definition they disagree with the things they're flaming.
| So all this does is reward people for flamewars, while the
| moderators effectively crusade on behalf of the flamers. But it's
| "okay" because, by definition, the moderators are going to be
| people who agree with the political views of the loudest viewers
| (if they weren't they'd get heckled off), so the mods you
| actually get will be perfectly happy with this situation. Neither
| the mods nor the loudest users have any reason to dislike or see
| any problem with this arrangement. So why is it a problem?
| Because it leads to what I'll call a flameocracy: whoever flames
| loudest gets their way as the platform will align with their
| desires (in order to reduce how often they flame). The mods and
| the platform are held hostage by these users but are suffering
| literal Stockholm Syndrome as they fear setting off their abusers
| (the flamers).
| kalekold wrote:
| I wish we could all go back to phpBB forums. Small, dedicated
| online communities were great. I can't remember massive problems
| like this back then.
| sciencemadness wrote:
| The bad actors were much less prevalent back in the heyday of
| small phpBB style forums. I have run a forum of this type for
| 20 years now, since 2002. Around 2011 was when link spam got
| bad enough that I had to start writing my own bolt-on spam
| classifier and moderation tools instead of manually deleting
| spammer accounts. Captchas didn't help because most of the spam
| was posted by actual humans, not autonomous bots.
|
| In the past 2 years fighting spam became too exhausting and I
| gave up on allowing new signups through software entirely. Now
| you have to email me explaining why you want an account and
| I'll manually create one for the approved requests. The world's
| internet users are now more numerous and less homogeneous than
| they were back when small forums dominated, and the worst 0.01%
| will ruin your site for the other 99.99% unless you invest a
| lot of effort into prevention.
| pixl97 wrote:
| Yep, if you're on the internet long enough you'll remember
| the days before you were portscanned constantly. You'll
| remember the days before legions of bots hammered at your
| HTTP server. You'll remember it was rare to have some kiddie
| DDOS your server off the internet and to have to hide behind
| a 3rd-party provider like Cloudflare.
|
| That internet is long dead, hence discussions like Dead
| Internet Theory.
| carapace wrote:
| My mom still has a land-line. She gets multiple calls a
| day, robots trying to steal an old lady's money. For this
| we invented the telephone? The transistor?
| pixl97 wrote:
| Honestly the internet has a lot to do with this problem
| too; stealing VOIP accounts and spoofing caller ID have
| enabled a lot of this.
| joshstrange wrote:
| That's because they were small and often had strict rules
| (written or not), aka moderation, about how to behave. You
| don't remember massive problems because the bad actors were
| kicked off. It falls apart at scale and when everyone
| can't/won't agree on what "good behavior" or "the rules" is/are.
| root_axis wrote:
| phpBB forums have always been notorious for capricious bans
| based on the whims of mods and admins, it's just that getting
| banned from a website wasn't newsworthy 10 years ago.
| PathOfEclipse wrote:
| Re: "Here`s the answer everyone knows: there IS no principled
| reason for banning spam."
|
| The author is making the mistake of assuming that "free speech"
| means saying whatever you want, whenever you want. This was
| never the case, including at the time of the _founding_ of the
| U.S. constitution. There has always been a tolerance window which
| defines what you can say and what you can't say without
| repercussions, usually enforced by society and societal
| norms.
|
| The 1st amendment was always about limiting what the government
| can do to curtail speech, but, as we know, there are plenty of
| other actors in the system that have and continue to moderate
| communications. The problem with society today is that those in
| power have gotten really bad at defining a reasonable tolerance
| window, and in fact, political actors have worked hard to _shift_
| the tolerance window to benefit them and harm their opponents.
|
| So, he makes this mistake and then builds on it by claiming that
| censoring spam violates free speech principles, but that's not
| really true. And then he tries to equate controversy with spam,
| saying it's not so much about the content itself but how it
| affects users. And that, I think, leads into another major problem
| in society.
|
| There has always been a tension between someone getting
| reasonably versus unreasonably offended by something. However, in
| today's society, thanks in part to certain identitarian
| ideologies, along with a culture shift towards the worship or
| idolization of victimhood, we've given _tremendous_ power to a
| few people to shut down speech by being offended, and vastly
| broadened what we consider reasonable offense versus unreasonable
| offense.
|
| Both of these issues are ultimately cultural, but, at the same
| time, social media platforms have enough power to influence
| culture. If the new Twitter can define a less insane tolerance
| window and give more leeway for people to speak even if a small
| but loud or politically motivated minority of people get
| offended, then they will have succeeded in improving the culture
| and in improving content moderation.
|
| And, of course, there is a third, and major elephant in the room.
| The government has been caught collaborating with tech companies
| to censor speech indirectly. This is a concrete violation of the
| first amendment, and, assuming Republicans gain power this
| election cycle, I hope we see government officials prosecuted in
| court over it.
| slowmovintarget wrote:
| I think that's a mischaracterization of what was written about
| spam.
|
| The author wrote that most people don't consider banning spam
| to be free speech infringement because the act of moderating
| spam has nothing to do with the content and everything to do
| with the posting behavior in the communication medium.
|
| The author then uses that point to draw logical conclusions
| about other moderation activity.
|
| Leading with a strawman weakens your argument, I think.
| PathOfEclipse wrote:
| Fortunately it's not a strawman. From the article:
|
| =====
|
| Moderating spam is very interesting: it is almost universally
| regarded as okay to ban (i.e. CENSORSHIP) but spam is in no
| way illegal.
|
| Spam actually passes the test of "allow any legal speech"
| with flying colors. Hell, the US Postal Service delivers spam
| to your mailbox. When 1A discussions talk about free speech
| on private platforms mirroring free speech laws, the
| exceptions cited are typically "fire in a crowded theater" or
| maybe "threatening imminent bodily harm." Spam is nothing
| close to either of those, yet everyone agrees: yes, it's okay
| to moderate (censor) spam.
|
| =====
|
| He's saying directly that censoring spam is not supported by
| any free speech principle, at least as he sees it, and in
| fact our free speech laws allow spam. He also refers to the
| idea of "allow any legal speech" as the "free-speech"-based
| litmus test for content moderation, and chooses spam to show
| how this litmus test is insufficient.
|
| What about my framing of his argument is a strawman? It looks
| like a flesh-and-blood man! I am saying that his litmus test
| is an invalid or inaccurate framing of what a platform that
| supports free speech should be about. Even if the government
| is supposed to allow you to say pretty close to whatever you
| want whenever you want, it's never been an expectation that
| private citizens have to provide the same support.
| Individuals, institutions, and organizations have always
| limited speech beyond what the government could enforce.
| Therefore, "free speech" has never meant that you could say
| whatever is legal and everyone else will just go along with
| it.
|
| On the other hand, Elon Musk's simple remark that he knows
| he's doing a good job if both political extremes are equally
| offended shows me that he seems to understand free speech in
| practice better than this ex-Reddit CEO does!
| (https://www.quora.com/Elon-Musk-A-social-media-platform-s-
| po...)
| slowmovintarget wrote:
| For the record, I agree with your points in your original
| post regarding the nature of free speech and with regard to
| the Overton window for tolerable speech (if there is such a
| thing).
|
| I disagree with the notion that Yishan made a mistake in
| how he wrote about spam. You used that as a basis for
| disclaiming his conclusions.
|
| Yishan was not making a point about free speech, he was
| making the point that effective moderation is not about
| free speech at all.
| PathOfEclipse wrote:
| That's a fair point. At the same time:
|
| A) saying moderation is not about free speech is, I
| think, making a point about free speech. Saying one thing
| is unrelated to another is making a point about both
| things.
|
| B) Even framed this way, I think Yishan is either wrong
| or is missing the point. If you want to do content
| moderation that better supports free speech, what does
| that look like? I think Yishan either doesn't answer that
| question at all, or else implies that it's not solvable
| by saying the two are unrelated. I don't think that's the
| case, and I also think his approach of focusing less on
| the content and more on the supposed user impact just
| gives more power to activists who know how to use outrage
| as a weapon. If you want your platform to better support
| free speech, then I think the content itself should
| matter as much or more than peoples' reaction to it, even
| if moderating by content is more difficult. Otherwise,
| content moderation can just be gamed by generating the
| appropriate reaction to content you want censored.
| digitalsushi wrote:
| I can speak only at a Star Trek technobabble level on this, but
| I'd like it if I could mark other random accounts as "friends" or
| "trusted". Anything they upvote or downvote becomes a factor in
| whether I see a post or not. I'd also be upvoting/downvoting
| things, and being a possible friend/trusted.
|
| I'd like a little metadata with my posts, such as how
| controversially my network voted on them. The ones that are out
| of calibration I can view, see the responses, and then see if
| my network has changed. It would be nice to click on a friend
| and get a report across months of how similarly we vote. If we
| start to drift, I can easily cull them and get my feed cleaned
| up.
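|
| In code, the idea might look something like this (a
| hypothetical sketch, not any platform's API):
|
|     def friend_adjusted_score(base_score: float,
|                               friend_votes: dict[str, int],
|                               trust: dict[str, float]) -> float:
|         """Votes (+1/-1) from accounts I've marked trusted shift
|         whether a post surfaces in my feed."""
|         adjustment = sum(vote * trust.get(user, 0.0)
|                          for user, vote in friend_votes.items())
|         return base_score + adjustment
|
|     def vote_similarity(mine: dict[str, int],
|                         theirs: dict[str, int]) -> float:
|         """How similarly two users voted on shared posts: the
|         'report across months' described above."""
|         shared = mine.keys() & theirs.keys()
|         if not shared:
|             return 0.0
|         agree = sum(1 for p in shared if mine[p] == theirs[p])
|         return agree / len(shared)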
| cauthon wrote:
| Unfortunately this is antithetical to the advertising-based
| revenue model these sites operate on. There's no incentive for
| the site to relinquish their control over what you see and
| return it to the user.
|
| On an anecdotal level, the fraction of tweets in my feed that
| are "recommended topics" or "liked by people I follow" or
| simply "promoted" has risen astronomically over the past few
| months. I have a pretty curated list of follows (~100), I had
| the "show me tweets in chronological order" box checked back
| when that was an option, and the signal-to-noise ratio has
| still degraded to the point of being unusable.
| billyjobob wrote:
| Slashdot started introducing features like this 20 years ago.
| We thought "web of trust" would be the future of content
| moderation, but subsequent forums have moved further and
| further away from empowering users to choose what they want to
| see and towards simple top down censorship.
| guelo wrote:
| It's not censorship, it's about optimizing for advertisers
| instead of users. Which means users can't have too much
| control. But since users won't pay, advertising is the only
| business model that works.
| rdtwo wrote:
| True, you don't want to advertise on anti-China stuff (even
| if it's true).
| rewgs wrote:
| I wish we'd just stopped at "users won't pay" and realized
| that, if people aren't willing to pay for it, maybe it's
| not a product worth building.
| bombcar wrote:
| Web of trust works _too well_ and unfortunate ideas can be
| entertained by someone you trust, which is a no-no once you
| have top-down ideas.
|
| Almost everyone has someone they trust so much that even the
| most insane conspiracy theory would be entertained at least
| for a while if that trusted person brought it up.
| rdtwo wrote:
| I mean, remember when the Wuhan lab thing was a total
| conspiracy theory? Or when masks were supposedly not needed
| (the virus supposedly wasn't airborne) and you just had to
| wash your hands more? All sorts of fringe stuff sometimes
| turns out to be true. But you know, sometimes you get
| pizzagate; it's the price we pay.
| TMWNN wrote:
| >Almost everyone has someone they trust so much that even
| the most insane conspiracy theory would be entertained at
| least for a while if that trusted person brought it up.
|
| What's wrong with that?
|
| If someone you trust brings up a crazy idea, it _should_ be
| considered. Maybe for a day, maybe for an hour, maybe for a
| half second, but it shouldn't be dismissed immediately no
| matter what.
| SV_BubbleTime wrote:
| Can't have people being exposed to diversity of thought!
| Not by people they like, what if someone is wrong!
|
| ... I'm all about low hanging fruit in moderation, and not
| trying to fix human behavior. I'll keep waiting to see when
| that is back in vogue.
| Melatonic wrote:
| Very interesting idea and I think this could definitely be
| useful. Then again users could just create new accounts.
| robocat wrote:
| I would frame it more the opposite: I can easily see comments
| that are worthless to me, and I want to automatically downvote
| any comments by the same user (perhaps needs tags: there are
| users whose technical opinions are great, but I'm not
| interested at all in their jokes or politics).
|
| One problem with your idea is that moderation votes are
| anonymous, but ranking up positively or negatively depending on
| another user's votes would allow their votes to be deanonymised.
|
| Perhaps adding in deterministic noise would be enough to offset
| that? Or to prevent deanonymisation you need a minimum number
| of posting friends?
|
| In fact I would love to see more noise added to the HN comments
| and a factor to offset early voting. Late comments get few
| votes because they have low visibility, which means many great
| comments don't bubble to the top.
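|
| A rough sketch of the deterministic-noise idea (everything here is
| hypothetical): hash the post id together with the viewer id, so
| every viewer sees a stable but individually jittered score, and no
| single friend's vote can be read off the displayed number.
|
|     import hashlib
|
|     def jittered_score(true_score, post_id, viewer_id, spread=2):
|         """Per-viewer deterministic noise: stable across reloads,
|         different for each viewer, masking individual votes."""
|         seed = hashlib.sha256(f"{post_id}:{viewer_id}".encode()).digest()
|         noise = seed[0] % (2 * spread + 1) - spread  # in [-spread, +spread]
|         return true_score + noise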
| mjamesaustin wrote:
| In this thread, Yishan promotes an app called Block Party
| that lets you block anyone, or even all accounts that liked
| or retweeted something.
|
| https://twitter.com/blockpartyapp_
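|
| Mechanically, that kind of bulk block is just a fan-out over a
| tweet's engagement lists. A toy sketch (get_likers, get_retweeters,
| and block_user are hypothetical stand-ins here, not Block Party's
| or Twitter's actual API):
|
|     # Hypothetical stand-ins for the platform calls:
|     def get_likers(tweet_id): return ["spammer1", "troll2"]
|     def get_retweeters(tweet_id): return ["troll2", "bot3"]
|     def block_user(name): print("blocked", name)
|
|     def block_engagers(tweet_id):
|         """Block every account that liked or retweeted a tweet."""
|         engagers = set(get_likers(tweet_id)) | set(get_retweeters(tweet_id))
|         for name in engagers:
|             block_user(name)
|
|     block_engagers(12345)  # blocks spammer1, troll2, bot3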
| bombcar wrote:
| BlockParty is almost the definition of echo chamber
| construction.
| wormslayer666 wrote:
| I got my first experience in running a small-medium sized (~1000
| user) game community over the past couple years. This is mostly
| commentary on running such a community in general.
|
| Top-level moderation of any sufficiently cliquey group (i.e. all
| large groups) devolves into something resembling feudalism. As
| the king of the land, you're in charge of being just and meting
| out appropriate punishment/censorship/other enforcement of rules,
| as well as updating those rules themselves. Your goal at the end
| of the day is continuing to provide support for your product,
| administration/upkeep for your gaming community, or whatever else
| it was that you wanted to do when you created the platform in
| question. However, the cliques (whether they be friend groups,
| opinionated but honest users, actual political camps, or any
| other tribal construct) will always view your actions through a
| cliquey lens. This will happen no matter how clear or consistent
| your reasoning is, unless you fully automate moderation (which
| never works and would probably be accused of bias by design
| anyways).
|
| The reason this looks feudal is that you still must curry
| favor with those cliques, lest the greater userbase eventually
| buy into their reasoning about favoritism, ideological bias, or
| whatever else we choose to call it. At the end of the day, the
| dedicated users have _much_ more time and energy to argue, or
| propagandize, or skirt rules than any moderation team has to
| counteract it. If you 're moderating users of a commercial
| product, it hurts your public image (with some nebulous impact on
| sales/marketing). If you're moderating a community for a game or
| software project, it hurts the reputation of the community and
| makes your moderators/developers/donators uneasy.
|
| The only approach I've decided unambiguously works is one that
| doesn't scale well at all, and that's the veil of secrecy or
| "council of elders" approach which Yishan discusses. The king
| stays behind the veil, and makes as few public statements as
| possible. Reasoning is only given insofar as is needed to explain
| decisions, only responding directly to criticism as needed to
| justify actions taken anyways. Trusted elites from the userbase
| are taken into confidence, and the assumption is that they give a
| marginally more transparent look into how decisions are made, and
| that they pacify their cliques.
|
| Above all, the most important fact I've had to keep in mind is
| that the outspoken users, both those legitimately passionate as
| well as those simply trying to start shit, are a tiny minority of
| users. Most people are rational and recognize that
| platforms/communities exist for a reason, and they're fine with
| respecting that since it's what they're there for. When
| moderating, the outspoken group is nearly all you'll ever see.
| Catering to passionate, involved users is justifiable, but must
| still be balanced with what the majority wants, or is at least
| able to tolerate (the "silent majority" which every demagogue
| claims to represent). That catering must also be done carefully,
| because "bad actors" who seek action/change/debate for the sake
| of stoking conflict or their own benefit will do their best to
| appear legitimate.
|
| For some of this (e.g. spam), you can filter it comfortably as
| Yishan discusses without interacting with the content. However,
| more developed bad actor behavior is really quite good at
| blending in with legitimate discussion. If you as king recognize
| that there's an inorganic flamewar, or abuse directed at a user,
| or spam, or complaint about a previous decision, you have no
| choice but to choose a cudgel (bans, filters, changes to rules,
| etc) and use it decisively. It is only when the king appears weak
| or indecisive (or worse, absent) that a platform goes off the
| rails, and at that point it takes immense effort to recover it
| (e.g. your C-level being cleared out as part of a takeover, or a
| seemingly universally unpopular crackdown by moderation). As a
| lazy comparison, Hacker News is about as old as Twitter, and any
| daily user can see the intensive moderation which keeps it going
| despite the obvious interest groups at play. This is in spite of
| the fact that HN has _less_ overhead to make an account and begin
| posting, and seemingly _more_ ROI on influencing discussion (lots
| of rich/smart/fancy people _post_ here regularly, let alone
| read).
|
| Due to the need for privacy, moderation fundamentally cannot be
| democratic or open. Pretty much anyone contending otherwise is
| just upset at a recent decision or is trying to cause trouble for
| administration. Aspirationally, we would like the general
| _direction_ of the platform to be determined democratically, but
| the line between these two is frequently blurry at best. To avoid
| extra drama, I usually aim to do as much discussion with users as
| possible, but ultimately perform all decisionmaking behind closed
| doors -- this is more or less the "giant faceless corporation"
| approach. Nobody knows how much I (or Elon, or Zuck, or the guys
| running the infinitely many medium-large discord servers)
| actually take into account user feedback.
|
| I started writing this as a reply to paradite, but decided
| against that after going far out of scope.
| blfr wrote:
| > Because it is not TOPICS that are censored. It is BEHAVIOR.
|
| > (This is why people on the left and people on the right both
| think they are being targeted)
|
| An enticing idea but simply not the case for any popular existing
| social network. And it's triply not true on yishan's reddit which
| both through administrative measures and moderation culture
| targets any and all communities that do not share the favoured
| new-left politics.
| gambler wrote:
| He is half-correct, but not in a good way. When people on the
| left say something that goes against new-left agenda, they get
| suppressed too. That is not a redeeming quality of the system
| or an indicator of fairness. It simply shows that the ideology
| driving moderation is even more narrow-minded and intolerant of
| dissent than most observers assume at first sight.
|
| At the same time, it's trivial to demonstrate that YouTube and
| Twitter (easy examples) primarily target conservatives with
| their "moderation". Just look at who primarily uses major
| alternative platforms.
| AhmedF wrote:
| > it's trivial to demonstrate that YouTube and Twitter (easy
| examples) primarily target conservatives with their
| "moderation".
|
| Terrible take.
|
| Actual data analysis shows that at _worst_ conservatives are
| moderated equally, and at best, less than non-conservatives.
|
| Here's something to chew on:
| https://forward.com/fast-forward/423238/twitter-white-nation...
| archagon wrote:
| Or consider that perhaps the right in particular tends to
| harbor and support people who lean more towards
| disinformation, hate speech, and incitement to violence.
| [deleted]
| [deleted]
| pj_mukh wrote:
| I mean, that's the claim; the counter-claim would require a
| social network banning _topics_ and not behavior. Note: As a
| user you can see topics, you can't see behavior. The fact that
| some users flood other users' DMs is not visible to all users.
| So how do you know?
|
| "I don't trust left-y CEO's", is a fair enough answer, but
| really that's where the counter-claim seems to end. Now that we
| have a right-wing CEO, looks like the shoe is on the other
| foot[1]
|
| [1]
| https://twitter.com/AOC/status/1588174959817658368?s=20&t=F9...
| luckylion wrote:
| > As a user you can see topics, you can't see behavior.
|
| True, but not really a good argument for the "trust us, this
| person needed banning, no we will not give any details"-style
| of moderation that most companies have applied so far. And
| you can see topics, so you'll generally notice when topics are
| being banned, not behavior, because they usually don't align
| perfectly.
| jtolmar wrote:
| All the communist subreddits are in constant hot water with the
| admins. They banned ChapoTrapHouse when it was one of the
| biggest subreddits. When a bunch of moderators tried to protest
| against reddit hosting antivax content, the admins took control
| over those subreddits.
|
| So no, you're just factually wrong here.
| throw_m239339 wrote:
| > All the communist subreddits are in constant hot water with
| the admins.
|
| Yet the biggest communist sub of all, r/politics, is doing
| perfectly fine.
|
| "moderating behavior"? Bullshit, when unhinged redditors are
| constantly accusing conservatives of being "traitors",
| "nazis", there is so much unhinged comments on these subs
| that it clearly demonstrate a general left wing bias when it
| comes to moderation, in favor of the most extreme left.
|
| Chapotraphouse was only banned because they harassed the
| wrong sub, but when it was about harassing people on subs
| deemed not progressive, reddit admins didn't care a bit.
| archagon wrote:
| r/politics is "communist"? That's... just a really dumb
| take. If there is a far-left presence on Reddit, it is not
| prominent. r/politics is left-leaning and angry, but,
| objectively speaking, not really all that extreme.
|
| And, for what it's worth, it seems perfectly reasonable to
| label those who tried to overthrow our democratic
| government "traitors".
| throw_m239339 wrote:
| > r/politics is "communist"? That's... just a really dumb
| take. If there is a far-left presence on Reddit, it is
| not prominent. r/politics is left-leaning and angry, but,
| objectively speaking, not really all that extreme.
|
| Obviously, none of the affluent idiots from chapo or
| hexbear controlling r/politics, r/news, or r/worldnews are
| really communists; they are just rich asshats that
| pretend to be, but my point still stands. They are still
| spouting Marxist nonsense and violent speech, and their
| behavior isn't moderated, as long as they don't target
| "the wrong people".
| ineptech wrote:
| This kind of sentiment always shows up in this kind of
| thread; I think a lot of people don't really grok that
| being far enough to one side causes an unbiased forum to
| appear biased against you. If you hate X enough, Reddit
| and Twitter are going to seem pro-X, regardless of what X
| is.
|
| (And, separately, almost no one who argues about anything
| being "communist" is using the same definition of that
| word as the person they're arguing with, but that's a
| different problem entirely)
| jonwithoutanh wrote:
| ... you ever try and post anything in /r/conservative? you
| get banned. Doesn't matter if you are quoting something
| president Trump had said 1000 times before. You get banned.
| You can't even quote them back to themselves, or ask them a
| question. You get banned.
|
| Persecution fetish much buddy?
| throw_m239339 wrote:
| > Persecution fetish much buddy?
|
| It seems that some people here can't help making
| everything about their sick sexual thrills. And I'm not
| your buddy.
| AhmedF wrote:
| Well at least you're mask-off in regards to being bad
| faith.
| throw_m239339 wrote:
| > Well at least you're mask-off in regards to being bad
| faith.
|
| And you have no arguments, so you resort to personal
| attacks. That makes you a troll.
| Natsu wrote:
| Yeah, the "there's no principled reason to ban spam" is just
| silly. The recipients don't want to see it whereas people cry
| censorship when messages they want to see are blocked.
|
| It's literally the difference between your feed being filtered
| by your choices & preferences and someone else imposing theirs
| upon you.
| archagon wrote:
| I see many more people crying censorship when messages that
| they want _others_ to see are blocked.
| Natsu wrote:
| You must hang out in a very different place, then. I see
| much more outcry when 3rd parties come between willing
| speakers and recipients, with most of the rest being people
| misrepresenting censorship as moderation because it allows
| them to justify it.
|
| In that vein, this went up on HN the other day:
| https://astralcodexten.substack.com/p/moderation-is-differen...
| kodt wrote:
| There are plenty of right wing / politically conservative subs.
| Fervicus wrote:
| Funny that you never see them on the front page.
| bnralt wrote:
| Indeed, I've seen several subs put in new rules where certain
| topics aren't allowed to be discussed at all, because the
| administrators told them that the sub would get banned if the
| users went against the beliefs held by the admins (even if the
| admins had a minority opinion when it came to the country as a
| whole).
|
| Then there is just arbitrary or malicious enforcement of the
| rules. /r/Star_Trek was told by admins they would be banned if
| they talked about /r/StarTrek at all, so now that's a topic
| that's no longer allowed in that sub. But there are tons of
| subs set up specifically to talk about other subs, where just
| about all posts are about other subs (such as
| /r/subredditdrama), and the admins never bother them.
|
| I don't think we can have a conversation about moderation when
| people are pretending that the current situation doesn't exist,
| and that moderation is only ever done for altruistic reasons.
| It's like talking about police reform but pretending that no
| police officer has ever done anything wrong and not one of them
| could ever be part of a problem.
| thepasswordis wrote:
| /r/srd and /r/againsthatesubreddits exist with the _explicit_
| purpose of brigading other subs, and yet they are not banned.
| UncleMeat wrote:
| SRD has an explicit "no brigading other subs" rule. How is
| their explicit purpose brigading other subs?
| Manuel_D wrote:
| Kiwifarms has an explicit "don't interact with the
| subjects" rule. Does that mean it never happens?
| UncleMeat wrote:
| Brigading absolutely happens in SRD. We can talk about
| whether this style of content should exist, but it does
| not "exist with the explicit purpose of brigading other
| subs."
| Manuel_D wrote:
| Right, it exists with the tacit purpose of brigading
| other subs. But like Kiwifarms, blurbs in the site rules
| mean nothing given the context of the community.
| thepasswordis wrote:
| "Hey guys no brigading okay? ;-)" followed by a page
| which directly links to threads for people to brigade.
|
| They don't even bother to use the np.reddit "no
| participation" domain. Most other subs don't even allow
| you to _link_ outside the sub, because they've been
| warned by admins about brigading.
|
| Their rules barely even mention brigading:
| https://www.reddit.com/r/subredditdrama/wiki/rules, and
| you have to go to the expanded version of the rules to
| find even _this_, which just says not to _vote_ in
| linked threads.
|
| Literally the entire purpose of this sub is to brigade
| and harass other subs. Their politics align with those of
| the admins, though, so it's allowed. It is _blatant_
| bullying at the tacit encouragement of the people running
| the site.
| UncleMeat wrote:
| IIRC, np was the norm for many years and it just didn't
| actually change anything. Oodles of people do get banned
| from SRD for commenting in linked threads. The easiest
| way to see this is when month+ old threads get linked.
| Only the admins can see downvoting patterns.
|
| Is simply linking to other threads on reddit sufficient
| for you to consider something promoting brigading?
| bnralt wrote:
| > Is simply linking to other threads on reddit sufficient
| for you to consider something promoting brigading?
|
| As I mentioned previously, linking to other subs, or even
| simply _talking_ about /r/StarTrek, was enough for admins
| to accuse /r/Star_Trek of brigading. They threatened to
| shut them down unless they stopped members from doing
| that, and so you're not allowed to do it in the sub
| anymore.
|
| Whether you think that linking to other subs is brigading
| or not, it's clear that admins call it brigading when
| they want to shut down subs, yet let it continue on
| much larger subs dedicated to the act as long as the
| admins like the sub.
|
| Edit: For example, here's a highly upvoted SRD post
| talking about the admins threatening /r/Star_Trek if they
| mention /r/StarTrek[1]. They call /r/Star_Trek linking to
| /r/StarTrek posts to complain about them "brigading," in
| the same post that they themselves are linking to a
| /r/Star_Trek post in order to complain about it.
|
| [1] https://www.reddit.com/r/SubredditDrama/comments/tuem1m/the_...
| bombcar wrote:
| The big difference with Reddit is if _posters_ get
| banned, or if the subreddit gets banned.
|
| The "good" subreddits will get posters banned left and
| right, but accounts are cheap and they return.
|
| The "bad" subreddits get banned.
| KarlKode wrote:
| What I got from similar subreddits (e.g.
| /r/bestoflegaladvice) is that you'll get (shadow)banned
| really fast if you click a link in the subreddit and
| comment on the linked post.
|
| Just mentioning this because I agree with the point you
| make (in general).
| Zak wrote:
| > _Most other subs don't even allow you to link outside
| the sub, because they've been warned by admins about
| brigading._
|
| I joined reddit in 2005 and have moderated several
| subreddits. The admins have never imposed anything
| resembling that on any subreddit I have moderated. I have
| a suspicion they impose it when they see a large amount
| of brigading behavior.
|
| Perhaps it's not applied in an entirely fair or even
| manner, but I suspect it's only applied when there's an
| actual problem.
| shafoshaf wrote:
| I'm not sure we could tell the difference. As Yishan states,
| the proof of the behavior isn't being made public because of
| the risk of creating new issues. Without that, you would
| never know.
|
| As for specific platforms, aka Reddit, how can one be sure that
| right wingers on that platform aren't in fact more likely to
| engage in bad behavior than left wingers? It might be because
| they are being targeted, but it could also be that that group
| of people on that platform tends to act more aggressively.
|
| I am NOT saying that I know if Reddit is fair in its
| moderation, I just don't know.
| realgeniushere wrote:
| oneneptune wrote:
| Can you elaborate with an example? I'm unfamiliar with reddit
| and its content management. I'm unsure if the premise of "AI"
| moderation is true; how could it moderate beyond a pattern or
| behavior, since it can't reasonably be scanning every post and
| comment for political affiliation?
| urbandw311er wrote:
| https://threadreaderapp.com/thread/1586955288061452289.html
| wasmitnetzen wrote:
| As once famously said by Mark Twain: "I didn't have time to write
| a short Twitter thread, so I wrote a long one instead."
| marban wrote:
| Make that "I didn't have time to write a tweet, so I wrote a
| Twitter thread instead."
| wasmitnetzen wrote:
| I thought about that phrasing, but it was too far from the
| original quote for me. But yes, it also works and is the
| better joke as well.
| gryBrd1987 wrote:
| Twitter is text-based. Video games have had text-based profanity
| filters in online games for years.
|
| Make it easy for users to define a regex list saved locally. On
| the backend, train a model that filters images of gore and
| genitals. Aim users who opt in to that experience at that
| filtered stream.
|
| This problem does not require a long-winded thesis.
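|
| Indeed, the client-side half is a few lines (the file name and
| format are invented; a real client would wrap this in UI):
|
|     import re
|
|     # Hypothetical local file: one regex pattern per line.
|     with open("my_filters.txt") as f:
|         FILTERS = [re.compile(line.strip(), re.IGNORECASE)
|                    for line in f if line.strip()]
|
|     def visible(post_text):
|         """Hide any post matching one of the user's own patterns."""
|         return not any(p.search(post_text) for p in FILTERS)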
| crummy wrote:
| why do you think nobody else has tried this / had any success
| with this approach?
| gryBrd1987 wrote:
| Because we focus on abstract problem statements and coded
| appeals to authority (as if an ex-Reddit CEO is that special;
| there are a few of them), rather than concrete engineering?
|
| User demand to control what they see is there. It's why TV
| was successful; don't like what's on History? Check out
| Animal Planet.
|
| Tech CEOs need their genius validated and refuse to concede
| anyone else knows what's best for themselves. What everyone
| else sees is a problem for a smurt CEO to micromanage to
| death, of course.
| thrwaway349213 wrote:
| What yishan is missing is that the point of a council of experts
| isn't to effectively moderate a product. The purpose is to
| deflect blame from the company.
|
| It's also hilarious that he says "you can't solve it by making
| them anonymous" because a horde of anonymous mods is precisely
| how subreddits are moderated.
| [deleted]
| StanislavPetrov wrote:
| >Why is this? Because it has no value? Because it's sometimes
| false? Certainly it's not causing offline harm.
|
| >No, no, and no.
|
| Fundamentally disagree with his take on spam. Not only does spam
| have no value, it has negative value. The content of the spam
| itself is irrelevant when the same message is being pushed out a
| million times and obscuring all other messages. Reducing spam
| through rate-limiting is certainly the easiest and most impactful
| form of moderation.
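|
| As a toy illustration of why rate limiting is cheap (all numbers
| and names invented), a per-account token bucket never has to look
| at the message content at all:
|
|     import time
|
|     class TokenBucket:
|         """Allow bursts up to `capacity` posts, refilling at `rate`/sec."""
|         def __init__(self, rate=0.1, capacity=5):
|             self.rate, self.capacity = rate, capacity
|             self.tokens, self.last = float(capacity), time.monotonic()
|
|         def allow_post(self):
|             now = time.monotonic()
|             self.tokens = min(self.capacity,
|                               self.tokens + (now - self.last) * self.rate)
|             self.last = now
|             if self.tokens >= 1:
|                 self.tokens -= 1
|                 return True
|             return False  # the millionth copy is dropped, content unseen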
| linuxftw wrote:
| What a bunch of long-winded babble. Incredibly, he's shilling
| an app at the end of this.
|
| I don't agree that this is an interesting submission, and IMO
| there's no new information here.
| gist wrote:
| > No, you can't solve it by making them anonymous, because then
| you will be accused of having an unaccountable Star Chamber of
| secret elites (especially if, I dunno, you just took the company
| private too). No, no, they have to be public and "accountable!"
|
| This is bulls... Sorry.
|
| Who cares what you are accused of doing?
|
| Why does it matter if people perceive that there is a star
| chamber? Even that reference. Sure, the press cares and will make
| it an issue, and tech types will care because, well, they have to
| make a fuss about everything and anything to remain relevant.
|
| After all, what are grand juries? (They are secret.) Does the fact
| that people might think they are star chambers matter at all?
|
| You see this is exactly the problem. Nobody wants to take any
| 'heat'. Sometimes you just have to do what you need to do and let
| the chips fall where they fall.
|
| The number of people who might use twitter or might want to use
| twitter who would think anything at all about this issue is
| infinitesimal.
| blantonl wrote:
| This really was an outstanding read and take on Elon, Twitter,
| and what's coming up.
|
| But it literally could not have been posted in a worse medium for
| communicating this message. I felt like I had to pat my head and
| rub my tummy at the same time reading through all this, and
| sharing it succinctly with colleagues meant spending a
| good 15 minutes cutting and pasting the content.
|
| I've never understood people posting entire blog type posts
| to.... Twitter.
| threeseed wrote:
| It was incoherent rambling, and none of it really works for
| Twitter.
|
| Twitter is ultimately beholden to its advertisers, who are
| constantly on a knife edge about whether to bother using it or
| not. We have already seen GM and L'Oreal pull ad spend and many
| more will follow if their moderation policies are not in-line
| with community standards.
|
| If Musk wants to make Twitter unprofitable then sure relax the
| moderation otherwise might want to keep it the same.
| filoleg wrote:
| L'Oreal didn't pull their twitter ad spend[0].
|
| 0. https://www.reuters.com/business/retail-consumer/loral-suspe...
| threeseed wrote:
| FT disagrees:
| https://www.ft.com/content/17281b81-b801-4a8f-8065-76d3ffb40...
| 0xbadcafebee wrote:
| It seemed interesting but after the 400th tweet I lost interest
| and went to do something productive.
| dariusj18 wrote:
| Does anyone else think it's brilliant that he put advertisements
| inside his own thread?
| drewbeck wrote:
| If it was for some random app or gadget I'd be mad but it's
| literally trying to save humanity so I give it a pass. We need
| to be talking about mitigating and surviving catastrophic
| climate change more, not less.
| LegitShady wrote:
| more like "oh never click on yishan threads ever again, this
| guy wants to put ads in twitter threads, who has time and
| patience for that? not me."
|
| Brilliant? For immediately getting large numbers of readers to
| click away and discrediting himself into the future, sure, that
| might be brilliant, I guess.
|
| It makes him seem desperate for attention and clueless.
| luuuzeta wrote:
| >brilliant
|
| More like weird and unexpected
| klyrs wrote:
| I found it to be an interesting exercise of "spam is protected
| speech." I mean, I hated it, but it really did drive the point
| home.
| TheCapeGreek wrote:
| I like yishan's content and his climate focus, but this "we
| interrupt your tweet thread for sponsored content" style tangent
| is a bit annoying - not directly for doing it or its content, but
| because I can see other thread writers picking this up and we end
| up the same as YouTube with sponsored sections of content that
| you can't ad block*.
|
| * FWIW, with YT you can block them with Sponsorblock, which works
| with user-submitted timestamps of sponsored sections in videos.
| If this tweet technique takes off I'd imagine a similar idea for
| tweets.
| syncmaster913n wrote:
| > but this "we interrupt your tweet thread for sponsored
| content" style tangent is a bit annoying
|
| I found this hilarious. I don't use Twitter and so was unaware
| that these annoying tangents are common on the platform. As a
| result, I thought Yishan was using them to illustrate how it's
| not necessarily the content (his climate initiative) but a
| specific pattern of behavior (saying the 'right' thing at the
| wrong time, in this case) that should be the target of
| moderation.
|
| In real life we say: "it's not what you said, it's just the
| _way_ you said it!" Perhaps the digital equivalent of that
| could be: "it's not what you said, it's just _when_ you said
| it."
| bombcar wrote:
| And it's funny because if you could "downvote/upvote"
| individual tweets in that tweet storm, his "off topic" tweets
| would be downvoted into oblivion.
|
| I think the _fundamental problem_ with the internet today is
| that, almost _by definition_, ads are unwanted content and
| have to be forced on the user.
| kybernetyk wrote:
| While many YouTube videos provide very interesting content, most
| twitter "threads" are just inane ramblings by some blue
| checkmark. So for yt videos I go the extra step of installing an
| extension. For twitter though? I just close the tab and never
| return.
|
| How can people who are not totally dopamine-deprived zombies
| find twitter and this terrible "thread" format acceptable?
| Just write a coherent blog post pls.
| klodolph wrote:
| Dopamine-deprived zombies?
|
| I don't find threads hard to read. There's some extra
| scrolling, but it's still in linear order.
|
| People post on Twitter because it reaches people, obviously.
| tkk23 wrote:
| >, but this "we interrupt your tweet thread for sponsored
| content" style tangent is a bit annoying
|
| It is annoying but it can be seen as part of his argument. How
| can spam be moderated if even trustworthy creators create spam?
|
| According to him, it's not spam because it doesn't fulfill the
| typical patterns of spam, which shows that identifying noise
| does require knowledge of the language.
|
| It could be interesting to turn his argument around. Instead of
| trying to remove all spam, a platform could offer the tools to
| handle all forms of spam and let its users come up with clever
| ways to use those tools.
| Fervicus wrote:
| > Our current climate of political polarization makes it easy to
| think...
|
| Stopped reading there. Reddit I think is one of the biggest
| offenders of purposely cultivating a climate of political
| polarization.
| crummy wrote:
| how come you stopped reading there?
| Fervicus wrote:
| My second statement answers that question. I don't want
| moderation advice from someone who was involved in a platform
| that purposely sets moderation policies to create political
| polarization. A comment by someone below sums it up nicely.
|
| > ...and it's triply not true on yishan's reddit which both
| through administrative measures and moderation culture
| targets any and all communities that do not share the
| favoured new-left politics.
|
| In yishan's defense however, I am not sure if those problems
| with reddit started before or after he left.
| hackerlight wrote:
| > favoured new-left politics.
|
| Citation needed. r/ChapoTrapHouse was banned, and there are
| many large alt-right subreddits in existence right now that
| haven't been banned (like r/tucker_carlson).
| csours wrote:
| Is there a better name than "rational jail" for the following
| phenomenon:
|
| We are having a rational, non-controversial, shared-fact-based
| discussion. Suddenly the first party in the conversation goes off
| on a tangent and starts making values- or emotion-based
| statements instead of factual ones. The other party then gets angry
| and/or confused. The first party then gets angry and/or confused.
|
| The first party did not realize they had broken out of the
| rational jail that the conversation was taking place in; they
| thought they were still being rational. The second party noticed
| an idea that did not fit with their rational dataset, detected a
| jailbreak, and this upset them.
| cansirin wrote:
| trying out.
| UI_at_80x24 wrote:
| I've always thought that slashdot handled comment moderation the
| best. (And even that still had problems.)
|
| In addition to that these tools would help:
|
| (1) Client-side: Being able to block all content from specific
| users and the replies to specific users.
|
| (2) Server-side: If userA always 'upvotes' comments from userB
| apply a negative weighting to that upvote (so it only counts as
| 0.01 of a vote). Likewise, with 'group-voting'; if userA, userB,
| and userC always vote identically down-weight those votes. (this
| will slow the 'echo chamber' effect)
|
| (3) Account age/contribution scale: if userZ has been a member of
| the site since its inception, AND has a majority of their posts
| up-voted, AND contributes regularly, then give their votes a
| higher weighted value.
|
| Of course these wouldn't solve everything, as nothing ever will
| address every scenario; but I've often thought that these things
| combined with how slashdot allowed you to score between -1 to 5,
| AND let you set the 'post value' to 2+, 3+, or 4+ would help
| eliminate most of the bad actors.
|
| Side note: Bad Actors, and "folks you don't agree with" should
| not be confused with each other.
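|
| As a rough sketch of how (2) and (3) might combine (all thresholds
| and weights here are invented for illustration):
|
|     def vote_weight(voter, author, history, account_age_days,
|                     upvote_ratio):
|         """history[(voter, author)]: fraction of the voter's recent
|         votes that went to this author. Returns a vote multiplier."""
|         w = 1.0
|         # (2) a voter who nearly always upvotes this author counts for
|         # almost nothing, damping mutual-admiration cliques
|         if history.get((voter, author), 0.0) > 0.9:
|             w *= 0.01
|         # (3) old accounts with a well-received track record count extra
|         if account_age_days > 5 * 365 and upvote_ratio > 0.75:
|             w *= 2.0
|         return w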
| sbarre wrote:
| One thing that's easy to forget is that super-complex weighted
| moderation/voting systems can get computationally expensive at
| the scale of something like Twitter or Facebook, etc.
|
| Slashdot had a tiny population, relatively speaking, so could
| afford to do all that work.
|
| But when you're processing literally millions of posts a
| minute, it's a different order of magnitude I think.
| qsort wrote:
| > Slashdot had a tiny population...
|
| Tiny, specific, non-generalist population. As soon as that
| changed, /. went down the drain like everything else. I still
| like /.'s moderation system better than most, but the
| specifics of how the system works are a second order concern
| at best.
| bombcar wrote:
| The real problem with ALL moderation systems is Eternal
| September.
|
| Once the group grows faster than some amount, the new
| people never get assimilated into the group, and the group
| dies.
|
| "Nobody goes there, it's too popular."
| haroldp wrote:
| > I've always thought that slashdot handled comment moderation
| the best.
|
| A limited number of daily moderation "points". A short list of
| moderation reasons. Meta-moderation.
| com2kid wrote:
| > Server-side: If userA always 'up votes' comments from userB
| apply a negative weighting to that upvote
|
| This falls down, hard, in expert communities. There are a few
| users who are extremely knowledgeable and are always going to
| get upvoted by long-term community members who acknowledge that
| expert's significant contributions.
|
| > Being able to block all content from specific users and the
| replies to specific users.
|
| This is doable now client side, but when /. was big, it had to
| be done server side, which is where I imagine all the limits
| around friend/foes came from!
|
| The problem here is, trolls can create gobs of accounts easily,
| and malevolent users group together to create accounts and
| upvote them, so they have plenty of spare accounts to go
| through.
| joemi wrote:
| I wonder about your (2) idea... If the goal is to reduce the
| effect of bots that vote exactly the same, then ok, sure.
| (Though if it became common, I'm sure vote bots wouldn't have a
| hard time being altered to add a little randomness to their
| voting.) But I'm not sure how much it would help beyond that,
| since if it's not just identifying _exact_ same voting, then
| you're going to need to fine-tune some percentage-the-same or
| something like that. And I'm not sure the same fine-tuned
| percentage is going to work well everywhere, or even across
| different threads or subforums on the same site. I also feel
| like (ignoring the site-to-site or subforum-to-subforum
| differences) that it would be tricky to fine tune correctly to
| a point where upvotes still matter. (Admittedly I have nothing
| solid to base this on other than just a gut feeling about it.)
|
| It's an interesting idea, and I wonder what results people get
| when trying it.
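|
| For what it's worth, the similarity measure itself is the easy
| part; picking the cutoff is where the fine-tuning pain lives. A
| toy version (names invented):
|
|     def vote_similarity(votes_a, votes_b):
|         """votes_*: dict of post_id -> +1 or -1. Returns the fraction
|         of commonly-voted posts where the two users agreed."""
|         shared = votes_a.keys() & votes_b.keys()
|         if not shared:
|             return 0.0
|         agree = sum(votes_a[p] == votes_b[p] for p in shared)
|         return agree / len(shared)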
| hunglee2 wrote:
| "there will be NO relation between the topic of the content and
| whether you moderate it, because it's the specific posting
| behavior that's a problem"
|
| some interesting thoughts from Yishan, a novel way to look at the
| problem.
| mikkergp wrote:
| I thought this point was overstated. Twitter certainly has some
| controversial content-related rules, and while as the CEO of
| Reddit he may have been mostly fighting macro battles, there
| are certainly content-related things that both networks censor.
| bink wrote:
| Reddit's content policy has also changed a LOT since he was
| CEO. While the policy back then may not have been as loose as
| "is it illegal?" it was still far looser than what Reddit has
| had to implement to gain advertisers.
| hunglee2 wrote:
| I think the unstated caveat would be 'anything illegal' but
| yes, the point was _overstated_, though, I think, still
| stands
| greenie_beans wrote:
| that digression into plugging his start-up was gross!
| quadcore wrote:
| I think tiktok is doing incredibly well in this regard, and in
| almost every social network aspect. Call me crazy, but I now
| prefer the discussions there to HN's most of the time. I find
| high-quality comments (and there are still good jokes in the
| middle). The other day I stumbled upon a video about physics which
| had the most incredibly deep and knowledgeable comments I've ever
| seen (_edit: found the video, it is not as good as I remembered
| but still close to HN level imo_). It's jaw-dropping how well it
| works.
|
| There is classical content moderation (the platform follows local
| laws) but mostly it kind of understands you so well that it puts
| you right in the middle of like-minded people. At least it feels
| that way.
|
| I don't have insider insight into how it truly works, I can only
| guess, but the algorithm feels like a league or two above
| everything I have seen so far. It feels like it understands people
| so well that it prompted deep thought experiments on my end. Like,
| let's say I want to know someone: I could simply ask "show me your
| tiktok". It's just a thought experiment, but it feels like tiktok
| could tell how good of a person you are or, more precisely, what
| your level of personal development is. Namely, it could tell if
| you're racist, it could tell if you're a bully, a manipulator or
| easily manipulated, it could tell if you're smart (in the sense of
| high IQ), if you have fine taste, if you are a leader or a
| loner... And on and on.
|
| Anyway, this is the ultimate moderation: follow the law and
| direct the user to like-minded people.
| ProjectArcturis wrote:
| >mostly it kind of understands you so well that it puts you right
| in the middle of like-minded people
|
| Doesn't this build echo chambers where beliefs get more and
| more extreme? Good from a business perspective (nobody gets
| pissed off and leaves because they don't see much that they
| object to). But perhaps bad from a maintaining-democracy
| perspective?
| [deleted]
| motohagiography wrote:
| I've had to give this some thought for other reasons, and after a
| couple decades solving analogous problems to moderation in
| security, I agree with yishan about signal to noise over the
| specific content, but what I have effectively spent a career
| studying and detecting with data is a single factor: malice.
|
| It's something every person is capable of, and it takes a lot of
| exercise and practice with higher values to reach for something
| else when your expectations are challenged, and often it's an
| active choice to recognize the urge and act differently. If there
| were a rule or razor I would make on a forum or platform, it's
| that all content has to pass the bar of being without malice.
| It's not "assume good intent," it's recognizing that there are
| ways of having very difficult opinions without malice, and one
| can have conventional views that are malicious, and
| unconventional ones that are not. If you have ever dealt with a
| prosecutor or been on the wrong side of a legal dispute, these
| are people fundamentally actuated by malice, and the similar
| prosecution of ideas and opinions (and ultimately people) is what
| wrecks a forum.
|
| It's not about being polite or civil, avoiding conflict, or even
| avoiding mockery and some very funny and unexpected smackdowns
| either. It's a quality that, in our being universally capable of
| it, I think we're also able to know when we see it. "Hate" is a
| weak substitute because it is so vague we can apply it to
| anything, but malice is ancient and essential. Of course someone
| malicious can just redefine malice the way they have done other
| things and use it as an accusation because words have no meaning
| other than as a means in struggle, but really, you can see when
| someone is actuated by it.
|
| I think there is a point where a person decides, consciously or
| not, that they will relate to the world around them with malice,
| and the first casualty of that is an alignment to honesty and
| truth. What makes it useful is that you can address malice
| directly and restore an equillibrium in the discourse, whereas
| accusations of hate and others are irrevocable judgments. I'd
| wonder if, given its applicability, this may be the tool.
| creeble wrote:
| I think it's two things: the power of malice, and popularity
| measurement. Malice and fame.
|
| Social networks are devices for measuring popularity; if you
| took the up/down arrows off, no one would be interested in
| playing. And we have proven once again that nothing gets up
| arrows like being mean.
|
| HN has the unusual property that you can't (readily) see
| others' score, just your own. That doesn't really make it any
| less about fame, but maybe it helps.
|
| When advertising can fund these devices to scale to billions,
| it's tough to be optimistic about how it reflects human nature.
| dang wrote:
| I might be misunderstanding what you mean by malice, but in
| that case it's probably not the best word for what you're
| describing. I'd be interested in a different description if you
| want to write one. I definitely don't think that malice is
| something you can just-see and make accurate judgments about,
| let alone detect with data.
|
| For me, malice relates to intent. Intent isn't observable. When
| person X makes a claim about Y's intent, they're almost always
| filling in invisible gaps using their imagination. You can't
| moderate on that basis. We have to go by effects, not intent
| (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...).
|
| It took me a long time to (partially) learn that if I tell a
| user "you were being $foo" where $foo relates to their intent,
| then (1) they can simply deny it and no one can prove
| otherwise, making the moderation position a weak one; and (2)
| mostly they will deny it sincerely because they never had such
| an intent, not consciously at least. Now you've given them a
| reason to feel entirely in the right, and if you moderate them
| anyway, they will feel treated unjustly. This is a way to
| generate bad blood, make enemies, and lose the high ground.
|
| The reverse strategy is much better: describe the _effects_ of
| someone 's posts and explain why they are bad. When inevitably
| they respond with "but my intent was ABC", the answer is "I
| believe you [what else can you say about something only that
| person could know?], but nonetheless the effects were XYZ and
| we have to moderate based on effects. Intent doesn't
| communicate itself--the burden is on the commenter to
| disambiguate it." (https://hn.algolia.com/?dateRange=all&page=0
| &prefix=true&que...)
|
| Often when people get moderated in this way, they respond by
| writing the comment they originally had in their head, as a
| sort of defense of what they actually meant. It's astonishing
| what a gap there often is between the two. Then you can respond
| that if they had posted that in the first place, it would have
| been fine, and that while they know what they have in their
| head when posting, the rest of us have no access to that--it
| needs to be spelled out explicitly.
|
| Being able to tell someone "if you had posted that in the first
| place, it would have been fine" is an _extremely_ strong
| moderation position, because it takes off the table the idea
| "you're only moderating me because you dislike my opinions",
| which is otherwise practically ubiquitous.
| japhyr wrote:
| Kate Manne, in Down Girl, writes about the problems with
| using intent as the basis for measuring misogyny. Intent is
| almost always internal; if we focus on something internal, we
| can rarely positively identify it. The only real way to
| identify it is capturing external expressions of intent:
| manifestos, public statements, postings, and sometimes things
| that were said to others.
|
| If you instead focus on external effects, you can start to
| enforce policies. It doesn't matter about a person's intent
| if their words and actions disproportionately impact women.
| The same goes for many -isms and prejudice-based issues.
|
| A moderator who understands this will almost certainly be
| more effective than one who gets mired in back-and-forths
| about intent.
|
| https://bookshop.org/p/books/down-girl-the-logic-of-misogyny...
| motohagiography wrote:
| This aspect of people writing what they meant again after
| being challenged and it being different - I'd assert that
| when there _is_ malice (or another intent) present, they
| double down or use other tactics toward a specific end other
| than improving the forum or relationship they are
| contributing to. When there is none, you get that different
| or broader answer, which is really often worth it. However,
| yes it is intent, as you identify.
|
| I have heard the view that intent is not observable, and I
| agree with the link examples that the effect of a comment is
| the best available heuristic. It is also consistent with a
| lot of other necessary and altruistic principles to say it's
| not knowable. On detecting malice from data, however, the
| security business is predicated on detecting intent from
| network data, so while it's not perfect, there are precedents
| for (more-) structured data.
|
| I might refine it to say that intent is not _passively_
| observable in a reliable way: if you interrogate the
| source, we get revealed intent. On the intent taking place in
| the imagination of the observer, that's a deep question.
|
| I think I have reasonably been called out on some of my views
| being the artifacts of the logic of underlying ideas that may
| not have been apparent to me. I've also challenged authors
| with the same criticism, where I think there are ideas that
| are sincere, and ones that are artifacts of exogenous intent
| and the logic of other ideas, and that there is a way of
| telling the difference by interrogating the idea (via the
| person.)
|
| I even agree with the principle of not assuming malice, but
| professionally, my job has been to assess it from indirect
| structured data (a hawkish, is this an attack?) - whereas I
| interpret the moderator role as assessing intent directly by
| its effects, but from unstructured data (is this
| comment/person causing harm?).
|
| Malice is the example I used because I think it has persisted
| in roughly its same meaning since the earliest writing, and
| if that subset of effectively 'evil' intent only existed in
| the imaginations of its observers, there's a continuity of
| imagination and false consciousness about their relationship
| to the world that would be pretty radical. I think it's right
| to not assume malice, but fatal to deny it.
|
| Perhaps there is a more concrete path to take than my
| conflating it with the problem of evil, even if on these
| discussions of global platform rules, it seems like a useful
| source of prior art?
| bombcar wrote:
| I would attribute to malice things like active attempts to
| destroy the very forum - spamming is a form of "malice of the
| commons".
|
| You will know when you encounter malice because nothing will
| de-malice the poster.
|
| But if it is not malice; you can even take what they said and
| _rewrite_ it for them in a way that would pass muster. In
| debate this is called steelmanning - and it's a very
| powerful persuasion method.
| Zak wrote:
| Spamming is an attempt to promote something. Destroying the
| forum is a side effect.
|
| It's fair to describe indifference to negative effects of
| one's behavior as malicious, and it is indeed almost never
| possible to transform a spammer into a productive member of
| a community.
| bombcar wrote:
| Yeah, most people take promotional spamming as the main
| one, but you can also refer to some forms of shitposting
| as spamming (join any twitch chat and watch whatever the
| current spam emoji is flood by) - but the second is
| perhaps closer to a form of cheering.
|
| If you wanted to divide it further I guess you could
| discuss "in-group spamming" and "out-group spamming"
| where almost all of the promotional stuff falls in the
| second but there are still some in the first group.
| Zak wrote:
| I guess I'd describe repeatedly posting the same emoji to
| a chat as flooding rather than spamming. Even then, your
| mention of _cheering_ further divides it into two
| categories of behavior:
|
| 1. Cheering. That's as good a description as any. This is
| intended to express excitement or approval and rally the
| in-group. It temporarily makes the chat useless for
| anything else, but that isn't its purpose.
|
| 2. Flooding. This is an intentional denial of service
| attack intended to make the chat useless for as long as
| possible, or until some demand made by the attacker is
| met.
| kashyapc wrote:
| Hi, dang! I wonder if it makes sense to add a summarized
| version of your critical point on "effects, not intent" to
| the HN guidelines. Though, I fear there might be undesirable
| _ill effects_ of spelling it out that way.
|
| Thanks (an understatement!) for this enlightening
| explanation.
| dang wrote:
| There are so many heuristics like that, and I fear making
| the guidelines so long that no one will read them.
|
| I want to compound a bunch of those explanations into a
| sort of concordance or whatever the right bibliographic
| word is for explaining and adding to what's written elsewhere
| (so, not concordance!)
| kashyapc wrote:
| Fair enough. Yeah your plans of "compounding" and
| "bibliographic concordance" (thanks for the new word)
| sound good.
|
| I was going to suggest this (but scratch it, your above
| idea is better): A small section called "a note on
| moderation" (or whatever) with hyperlinks to "some
| examples that give a concrete sense of how moderation
| happens here". There are many _excellent_ explanations
| buried deep in the search links that you post here.
| Many of them are a valuable riffing on [internet] human
| nature.
|
| As a quick example, I love your lively analogy[1] of a
| "boxer showing up at a dance/concert/lecture" for
| resisting flammable language here. It's funny and a
| cutting example that is impossible to misunderstand. It
| (and your other comment[2] from the same thread) offers so
| many valuable _reminders_ (it's easy to forget!). An
| incomplete list for others reading:
|
| - how to avoid the "scorched earth" fate here;
|
| - how "raw self-interest is fine" (if it gets you to
| curiosity);
|
| - why you can't "flamebait others into curiosity";
|
| - why the "medium" [of the "optionally anonymous internet
| forum"] matters;
|
| - why it's not practical to replicate the psychology of
| "small, cohesive groups" here;
|
| - how the "burden is on the commenter";
|
| - "expected value of a comment" on HN; and much more
|
| It's a real shame that these useful heuristics are buried
| so deep in the comment history. Sure, you do link to them
| via searches whenever you can; that's how I discovered
| 'em. But it's hard to stumble upon otherwise. Making a
| sampling of these easily accessible can be valuable.
|
| [1] 3rd paragraph here:
| https://news.ycombinator.com/item?id=27166919
|
| [2] https://news.ycombinator.com/item?id=27162386
| bombcar wrote:
| Commentary or gloss on the text, I believe, is sometimes
| used.
| AnimalMuppet wrote:
| But the difference between the original post and the revised
| post often _is_ malice (or so I suspect). The ideas are the
| same, though they may be developed a bit more in the second
| post. The difference is the anger/hostility/bitterness
| coloring the first post, that got filtered out to make the
| second post.
|
| I think that maybe the observable "bad effects" and the
| unobservable "malice" may be almost exactly the same thing.
| Eliezer wrote:
| This exchange ought to be a post in its own right. It seems
| to me that malice, hate, Warp contamination, whatever you
| want to call it, _is_ very much a large part of the modern
| problem; and also it's a true and deep statement that you
| should moderate based on effects and not tell anyone what
| their inner intentions were, because you aren't sure of those
| and most users won't know them either.
| nostrebored wrote:
| I find most forums that advocate against behavior they view as
| malicious wind up becoming hugboxes as people skirt this
| arbitrary boundary. I will never, never come back to platforms
| or groups after I get this feeling.
|
| Hugbox environments wind up having a loose relationship with
| the truth and a strong emphasis on emotional well-being.
|
| Setting your moderation boundaries determines the values of
| your platform. I'd much rather talk to someone who wants to
| hurt my feelings than someone who is detached from reality or
| not saying what they think.
| danans wrote:
| > "Hate," is a weak substitute because it is so vague we can
| apply it to anything
|
| That is a big stretch. Hate can't be applied to many things,
| including disagreements like this comment.
|
| But it can be pretty clearly applied to statements that, if
| carried out in life, would deny another person or peoples'
| human rights. Another is denigrating or mocking someone on the
| basis of things that can't or shouldn't have to change about
| themselves, like their race or religion. There is a pretty
| bright line there.
|
| Malice (per the conventional meaning of something bad intended,
| but not necessarily revealed or acted out) is a much lower bar
| that includes outright hate speech.
|
| > but really, you can see when someone is actuated by it.
|
| How can you identify this systematically (vs it being just your
| opinion), but not identify hate speech?
| Manuel_D wrote:
| Hate absolutely can be, and is, applied to disagreements: plenty
| of people consider disagreement around allowing natal males
| in women's sport hateful. Plenty of people consider
| opposition to the abolishment of police hateful. Plenty of
| people consider immigration enforcement hateful. I could go on...
| bombcar wrote:
| Any disagreement is classified as _hate_ now; the word is
| empty and worthless.
|
| We cannot regulate the internal forum, only the actions we
| perceive.
| danans wrote:
| > Plenty of people consider disagreement around allowing
| natal males in women's sport hateful. Plenty of people
| consider opposition to the abolishment of police hateful.
| Plenty of people consider immigration enforcement hateful.
|
| Those things aren't deemed hate speech, but they might be
| disagreed with and downvoted on some forums (e.g. HN), and
| championed on others (e.g. Parler), but that has nothing to
| do with them being hate speech. They are just unpopular
| opinions in some places, and I can understand how it might
| bother you if those are your beliefs and you get downvoted.
|
| Actual hate speech based on your examples is: promoting
| violence/harassment against non-cisgender people, promoting
| violence/harassment by police, and promoting
| violence/harassment by immigration authorities against
| migrants.
|
| Promoting violence and harassment is a fundamentally
| different type of speech than disagreeing with the
| prevailing local opinion on a controversial subject that
| has many shades of gray (that your examples intentionally
| lack).
| antod wrote:
| For some reason, this makes me wonder how Slashdot's moderation
| would work in the current age. Too nerdy? Would it get
| overwhelmed by today's shit posters?
| bombcar wrote:
| People don't care enough about the "community" anymore. It
| might work on a smallish scale, but the reality is everything is
| shitposting, even here.
|
| Even in Slashdot's heyday the number of metamoderators was
| vanishingly small. The best thing it had was the ability to
| filter anonymous cowards and the ability to browse from -1 to
| +5 if you wanted to.
| nkotov wrote:
| Is anyone else having a hard time following along? Can someone
| provide a tl;dr?
| Consultant32452 wrote:
| The public thinks about moderation in terms of content. Large
| social networks think in terms of behavior. Like let's say I
| get a chip on my shoulder about... the Ukraine war, one
| direction or another. And I start finding a way to insert my
| opinion on every thread. My opinion on the Ukraine war is fine.
| Any individual post I might make is fine and contextual to the
| convo. But I'm bringing down the over-all quality of discussion
| by basically spamming every convo with my personal grievance.
|
| Some kinds of content also get banned, like child abuse
| material and other obvious things. But the hard part is the
| "behavior" type bans.
| oceanplexian wrote:
| > Any individual post I might make is fine and contextual to
| the convo. But I'm bringing down the over-all quality of
| discussion by basically spamming every convo with my personal
| grievance.
|
| Isn't this how a healthy society functions?
|
| Political protests are "spam" under your definition. When
| people are having a protest in the street, it's inconvenient,
| people don't consent to it, it brings down the quality of the
| discussion (Protestors are rarely out to have a nuanced
| conversation). Social Media is the public square in the 21st
| century, and people in the public square should have a right
| to protest.
| jchw wrote:
| The commentary is interesting, but it does unfortunately gloss
| over the very real issue of actually controversial topics. Most
| platforms don't typically set out to ban controversial stuff from
| what I can tell, but the powers that be (advertisers, government
| regulators, payment processors, service providers, etc.) tend to
| be quite a bit more invested in such topics. Naughty language on
| YouTube and porn on Twitter are some decent examples; these are
| _not_ and never have been signal to noise ratio problems. While
| the media may be primarily interested in the problem of content
| moderation as it impacts political speech, I'd literally filter
| all vaguely politically charged speech (even at the cost of
| missing plenty of stuff I'd rather see) if given the option.
|
| I think that the viewpoints re: moderation are very accurate and
| insightful, but I honestly have always felt that it's been more
| of a red herring for the actual scary censorship creep happening
| in the background. Go find the forum threads and IRC logs you
| have from the 2000s and think about them for a little while. I
| think that there are many ways in which I'd happily admit the
| internet has improved, but looking back, I think that a lot of
| what was discussed and how it was discussed would not be
| tolerated on many of the most popular avenues for discourse today
| --even though there's really nothing particularly egregious about
| them.
|
| I think this is the PoV that one has as a platform owner, but
| unfortunately it's not the part that I think is interesting. The
| really interesting parts are always off on the fringes.
| wwweston wrote:
| It's hard for me to imagine what "scary actual censorship" is
| happening -- that is, to identify topics or perspectives that
| cannot be represented in net forums. If such
| topics/perspectives exist, then the effectiveness must be near
| total to the point where I'm entirely unaware of them, which I
| guess would be scary if people could provide examples. But
| usually when I ask, I'm supplied with topics which I have
| indeed seen discussed on Twitter, Reddit, and often even HN,
| so...
| cvwright wrote:
| Nobody wants to answer this, because to mention a
| controversial topic is to risk being accused of supporting
| it.
|
| You could look at what famous people have gotten into trouble
| over. Alex Jones or Kanye West. I assume there have been
| others, but those two were in the news recently.
| jchw wrote:
| The problem is that it's not really about censorship the way
| that people think about it; it's not about blanket banning
| the discussion of a topic. You can clearly have a discussion
| about extremely heated debate topics like abortion,
| pedophilia, genocide, whatever. However, in some of these
| topics there are pretty harsh chilling effects that prevent
| people from having very open and honest discussion about
| them. The reason why I'm being non-specific is twofold: one
| is because I am also impacted by these chilling effects, and
| another is because making it specific makes it seem like it's
| about a singular topic when it is about a recurring pattern
| of behaviors that shift topics over time.
|
| If you really don't think there have been chilling effects, I
| put forth two potential theories: one is that you possibly
| see this as normal "consequences for actions" (I do not
| believe this: I am strictly discussing ideas and opinions
| that are controversial even in a vacuum.) OR: perhaps you
| genuinely haven't really seen the fringes very much, and
| doubt their existence. I don't really want to get into it,
| because it would force me to pick specific examples that
| would inextricably paint me into those arguments, but I guess
| maybe it's worth it if it makes the point.
| wwweston wrote:
| > The problem is that it's not really about censorship the
| way that people think about it; it's not about blanket
| banning the discussion of a topic.
|
| Then we're far away enough from the topic of censorship
| that we should be using different language for what we're
| discussing. It's bad enough that people use the term
| "censorship" colloquially when discussing private refusal
| to carry content vs state criminalization. It's definitely
| not applicable by the time we get to soft stakes.
|
| As someone whose life & social circle is deeply embedded in
| a religious institution which makes some claims and
| teachings I find objectionable, I'm pretty familiar with
| chilling effects and other ways in which social stakes are
| held hostage over what topics can be addressed and how. And
| yet I've found these things:
|
| (1) It's taught me a lot about civil disagreement and
| debate, including the fact that more often than not, there
| are ways to address _even literally sacred topics_ without
| losing the stakes. It takes work and wit, but it's
| possible. Those lessons have been borne out later when I've
| chosen to do things like try to illuminate merits in pro-
| life positions while in overwhelmingly pro-choice forums.
|
| (2) It's made me appreciate the value of what the courts
| have called time/place/manner restrictions. Not every venue
| is or should be treated the same. Church services are the
| last time/place to object to church positions, and when one
| _does_ choose that it's best to take on the most
| obligation in terms of manner, making your case in the
| terms of the language, metaphors, and values of the church.
|
| (3) Sometimes you have to risk the stakes, and the only
| world in which it would actually be possible for there NOT
| to be such stakes would be one in which people have no
| values at all
| ramblerman wrote:
| Did he begin answering the question, drop some big philosophical
| terms, and then just drift off into "here is what I think we
| should do about climate change in 4 steps"...?
| cwkoss wrote:
| I find it surprising and a bit disappointing that so many HN
| readers find the manic meandering of yishan's thread persuasive
| jefftk wrote:
| He goes back to the main topic after a few tweets on his
| current climate work. It's actually a super long thread:
| https://threadreaderapp.com/thread/1586955288061452289.html
|
| (But I agree this is weird)
| permo-w wrote:
| he does come back to the point after his little side-piece
| about trees, but after a while I didn't feel he was actually
| providing any valuable information, so I stopped reading
| dahfizz wrote:
| It's a Twitter thread. Emotional, incoherent rambling is the
| standard.
| snowwrestler wrote:
| Yes, he is leveraging his audience. This is like going to a
| conference with a big-name keynote, but the lunch sponsor gets
| up and speaks for 5 minutes first.
|
| We're on the thread to read about content moderation. But since
| we're there, he's going to inject a few promos about what he is
| working on now. Just like other ads, I skimmed past them until
| he got back on track with the main topic.
| teddyh wrote:
| So what I'm hearing is that ads are moderated spam. Yeah, I can
| see that.
| matai_kolila wrote:
| Yeah well, Yishan failed miserably at topic moderation on Reddit,
| and generally speaking Reddit has notoriously awful moderation
| policies that end up allowing users to run their own little
| fiefdoms just because they name-squatted earliest on a given
| topic. Additionally, Reddit (also notoriously) allowed some
| horrendously toxic behavior to continue on its site (jailbait,
| fatpeoplehate, the_donald, conservative currently) for literal
| years before taking action, so even when it comes to basic admin
| activity I doubt he's the guy we should all be listening to.
|
| I think the fact that this is absurdly long and wanders at least
| twice into environmental stuff (which _is_ super interesting btw,
| definitely read those tangents) kind of illustrates just how not-
| the-best Yishan is as a source of wisdom on this topic.
|
| _Very_ steeped in typical SV "this problem is super hard so
| you're not allowed to judge failure or try anything simple" talk.
| Also it's basically an ad for Block Party by the end (if you make
| it that far), so... yeah.
| ranger207 wrote:
| Yeah, it's interesting how much reddit's content moderation at
| a site-wide level is basically the opposite of what he said in
| this thread. Yeah, good content moderation should be about
| policing behavior... so why weren't notorious brigading subs
| banned?
| pixl97 wrote:
| Do you have any arguments addressing what he actually said, or
| is this just a reverse argument to authority?
| matai_kolila wrote:
| Mostly just a reverse argument to authority, which isn't the
| fallacy an argument to authority is, AFAIK.
| [deleted]
| fazfq wrote:
| When people ask you why you hate twitter threads, show them this
| hodgepodge of short sentences with sandwiched CO2 removal
| advertisements.
| kodon wrote:
| Too bad he didn't post this on Reddit
| luuuzeta wrote:
| >this hodgepodge of short sentences with sandwiched CO2 removal
| advertisements
|
| I had to stop after the tree myths. Was it related to content
| moderation at all?
| halfmatthalfcat wrote:
| No it wasn't, pure shilling
| RockyMcNuts wrote:
| see also -
|
| Hey Elon: Let Me Help You Speed Run The Content Moderation
| Learning Curve
|
| https://www.techdirt.com/2022/11/02/hey-elon-let-me-help-you...
| hourago wrote:
| > Our current climate of political polarization makes it easy to
| think it's about the content of the speech, or hate speech, or
| misinformation, or censorship, or etc etc.
|
| Are we sure that it is not the other way around? Didn't social
| platforms create or increase polarization?
|
| I always see these comments from social platforms that take it
| as fact that society is polarized and that they work hard to
| fix it, when
| I believe that it is the other way around. Social media has
| created the opportunity to increase polarization and they are not
| able to stop it for technical, social or economic reasons.
| throw0101a wrote:
| > _Are we sure that it is not the other way around? Didn't
| social platforms create or increase polarization?_
|
| The process of polarization (in the US) started decades ago:
|
| * https://en.wikipedia.org/wiki/Why_We%27re_Polarized
|
| In fact it seems that people were always polarized, it's just
| that the political parties (R & D in the US) didn't really
| bother sorting themselves on topics until the 1960s: even in
| the 1970s and early 1980s it was somewhat common to vote for
| (e.g.) an R president but a D representative (or vice versa).
| Straight-ticket voting didn't really become the majority until
| the late 1980s and 1990s.
|
| There's a chapter or two in the above book describing
| psychology studies showing that humans form tribes
| 'spontaneously' for the most arbitrary of reasons. "Us versus
| them" seems to be baked into the structure of humans.
| somenameforme wrote:
| It's quite interesting that the USSR collapsed in 1991, which
| removed the biggest external "us vs them" actor.
|
| But on the other hand there are also countless other factors
| that are going to affect society at scale: rise internet,
| rise of pharmaceutical psychotropics, surge in obesity, surge
| in autism, declines in testosterone, apparent reversal of
| Flynn effect, and more.
|
| With so many things happening it all feels like a Rorschach
| test when trying to piece together anything like a meaningful
| hypothesis.
| raxxorraxor wrote:
| I think political parties only later began astroturfing on
| social media and splitting users into camps. Formerly, content
| in reddit's default subreddits was often low quality, but you
| still got some nice topics here and there. Now it is a
| propaganda hellhole that is completely in the hands of pretty
| polarized users.
|
| > "Us versus them" seems to be baked into the structure of
| humans.
|
| Not quite, but one of the most effective temptations one can
| offer is giving people a moral excuse to hate others. It works
| best when one can see those others as responsible for all the
| evil in the world.
| It feels good to judge, it distracts from your own faults,
| flaws, insecurities, fears and problems. This is pretty
| blatant and has become far, far worse than the formerly
| perhaps populist content on reddit. We especially see this on
| political topics, but also on the pandemic, for example.
| throw0101a wrote:
| > _I think political parties only later began astroturfing
| on social media and split users in camps._
|
| The splitting into camps (in the US) based on particular
| topics started much earlier than social media:
|
| * https://en.wikipedia.org/wiki/Southern_strategy
|
| > _Not quite, but one of the most effective temptations one
| can offer is giving people a moral excuse to hate others._
|
| The psychology studies referenced in the book show us-
| versus-them / in/out-group mentality without getting into
| moral questions or political topics.
| r721 wrote:
| Scott Alexander's review is worth reading to get a summary of
| this book: https://astralcodexten.substack.com/p/book-review-
| why-were-p...
| cafard wrote:
| I think that you should look into the history of talk radio, or
| maybe just radio in general. Then maybe a history of American
| journalism, from Robert McCormick's Chicago Tribune back to the
| party newspapers set up in the first years of the republic.
| Spivak wrote:
| Yepp, same message different medium. Having someone in your
| family who "listens to talk radio" was the "they went down
| the far right YouTube rabbit hole" of old.
|
| I mean the big names in talk radio are still clucking if you
| want to listen to them today.
| nradov wrote:
| Real society is not that polarized. If you talk to real people
| they mostly have moderate political opinions. But they don't
| tweet about politics.
|
| Twitter is not a real place.
| toqy wrote:
| I used to think this until several instances of various
| neighbors getting drunk enough to shed the veil of southern
| hospitality and reveal how racist they are.
|
| Plenty of people have radical thoughts and opinions, but are
| smart enough to keep it to themselves IRL
| Spivak wrote:
| But unfortunately real people are influenced and impacted by
| the fiction.
|
| If far right political bullshit would stay online and out of
| my state's general assembly that would be such a positive
| change.
| count wrote:
| Society is a closed system, twitter is not outside of
| society.
|
| The people on twitter are real people (well, mostly,
| probably), and have real political opinions.
|
| If you talk to people, by and large they'll profess moderate
| opinions, because _in person discussions still trigger
| politeness and non-confrontational emotions_ in most people,
| so the default 'safe' thing to say is the moderate choice,
| no matter what their true opinion happens to be.
|
| The internet allows people to take the proverbial mask off.
| SXX wrote:
| I would disagree about proverbial masks. The majority of
| people in the world, including the US, are simply too
| preoccupied with their everyday routine, problems and work
| to end up with extreme political views.
|
| What the Internet does have is the ease of changing masks
| and joining diverse groups. Trying something unusual without
| repercussions appeals to a lot of people who usually simply
| don't have time to join such groups offline.
|
| The real problem is that propaganda has unfortunately
| evolved too, with all the new research on human psychology,
| behaviors and fallacies. Abusing the weaknesses of the
| monkey brain at scale is relatively easy and profitable.
| nradov wrote:
| Nah. Even those few accounts on Twitter that are actually
| run by real people (not bots) are mostly trolling to some
| extent. It's all a big joke.
| count wrote:
| I thought that as well, until about Nov 2016...
| bombcar wrote:
| That was the biggest joke of all!
|
| So far ...
| r721 wrote:
| Yeah, there were some good articles about this:
|
| >The Making of a YouTube Radical
|
| >I visited Mr. Cain in West Virginia after seeing his YouTube
| video denouncing the far right. We spent hours discussing his
| radicalization. To back up his recollections, he downloaded and
| sent me his entire YouTube history, a log of more than 12,000
| videos and more than 2,500 search queries dating to 2015.
|
| https://www.nytimes.com/interactive/2019/06/08/technology/yo...
| (2019)
| belorn wrote:
| > Moderating spam is very interesting: it is almost universally
| regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way
| illegal.
|
| Interesting, in my country spam is very much illegal and I would
| hazard a guess that it is also illegal in the US, similar to how
| littering, putting up posters on people's buildings/cars/walls,
| graffiti (a form of spam), and so on is also illegal. If I
| received the amount of spam I get in email as phone calls I would
| go as far as calling it harassment, and of course robot phone
| calls are also illegal. Unsolicited email spam is also against
| the law.
|
| And if spam is against the service agreement on twitter then that
| could be a computer crime. If the advertisement is fraudulent (as
| is most spam), it is fraud. Countries also have laws about
| advertisement, which most spam is unlikely to honor.
|
| So I would make the claim that there are plenty of principled
| reasons for banning spam, all backed up by laws of the countries
| that the users and the operators live in.
| snowwrestler wrote:
| Nudity and porn are other examples of legal speech that have
| broad acceptance among the public (at least the U.S. public) to
| moderate or ban on social media platforms.
|
| Yishan's point is, most people's opinions on how well a
| platform delivers free speech vs censorship will index more to
| the content of the speech, rather than the pattern of behavior
| around it.
| thesuitonym wrote:
| Unsolicited phone calls are somewhat illegal, but it's
| dependent on circumstances. It's the same with email spam and
| mail spam. One person's spam is another person's cold call.
| Where do you draw the line? Is mailing a flier with coupons
| spam? Technically yes, but some people find value in it.
|
| In the US, spam is protected speech, but as always, no company
| is required to give anybody a platform.
| myself248 wrote:
| > In the US, spam is protected speech
|
| [citation needed]
|
| Doesn't the CAN-SPAM act explicitly declare otherwise?
| toqy wrote:
| I was under the impression that CAN-SPAM applies to email,
| not user generated content on the internet at large
| belorn wrote:
| It is both yes and no. CAN-SPAM does only apply to
| electronic mail messages, usually shortened to email.
| However...
|
| In late March, a federal court in California held that
| Facebook postings fit within the definition of
| "commercial electronic mail message" under the
| Controlling the Assault of Non-Solicited Pornography and
| Marketing Act ("CAN-SPAM Act;" 15 U.S.C. SS 7701, et
| seq.). Facebook, Inc. v. MAXBOUNTY, Inc., Case No.
| CV-10-4712-JF (N.D. Cal. March 28, 2011).
|
| There are also two other court cases: MySpace v. The
| Globe.com and MySpace v. Wallace.
|
| In the latter, the court concluded that "[t]o interpret
| the Act in the limited manner as advocated by [d]efendant
| would conflict with the express language of the Act and
| would undercut the purpose for which it was passed." Id.
| This Court agrees that the Act should be interpreted
| expansively and in accordance with its broad legislative
| purpose.
|
| The court defined "electronic mail address" as meaning
| nothing more specific than "a destination . . . to which
| an electronic mail message can be sent, and the
| references to local part and domain part and all other
| descriptors set off in the statute by commas represent
| only one possible way in which a destination can be
| expressed."
|
| Basically, in order to follow the spirit of the law the
| definition of "email" expanded, with traditional email
| like user@example.invalid being just one example of many
| forms of "email".
| null0ranje wrote:
| > In the US, spam is protected speech, but as always, no
| company is required to give anybody a platform.
|
| Commercial speech in the US is not fully protected speech and may
| be subject to a host of government regulation [0]. The
| government has broad powers to regulate the time, place, and
| content of commercial speech in ways that it does not for
| ordinary speech.
|
| [0] See https://www.law.cornell.edu/wex/commercial_speech
| gorbachev wrote:
| Not all spam is commercial.
|
| In fact, US legislators specifically made political spam
| legal in the CAN-SPAM bill.
| belorn wrote:
| It is dependent on circumstances, and the people who would
| draw the line in the end would be the government followed by
| the court.
|
| Not all speech is protected speech. Graffiti is speech, and
| the words being spoken could be argued as protected, but the
| act of spraying other people properties with it is not
| protected. Free speech rights does not overwrite other
| rights. As a defense in a court I would not bet my money on
| free speech in order to get away with crimes that happens to
| involves speech.
|
| Historically, US courts have divided speech into multiple
| categories. One of those is called fraudulent speech, which
| is not protected by free speech rights. Another category is
| illustrated by the anti-spam law in Washington State, which
| was found not to violate First Amendment rights because it
| prevents misleading emails. Washington's statute regulates
| deceptive _commercial speech_ and thus passed the
| constitutional test. Another court ruling, this one in
| Maryland, confirmed that commercial speech is less protected
| than other forms of speech and that it has no protection
| when it is demonstrably false.
|
| In theory a spammer could make non-commercial, non-
| misleading, non-fraudulent speech, and a site like twitter
| would then actually have to think about questions like the
| First Amendment. I can't say I have ever received or seen
| spam like
| that.
| buzer wrote:
| > In theory a spammer could make non-commercial, non-
| misleading, non-fraudulent speech, and a site like twitter
| would then actually have to think about questions like
| first-amendment. I can't say I have ever received or seen
| spam like that.
|
| While I don't think I have seen it on Twitter (then again I
| only read it when it's linked) I have seen plenty of it in
| some older forums & IRC. Generally it's just nonsense like
| "jqrfefafasok" or ":DDDDDD" being posted lots of times in
| quick succession, often to either flood out other things,
| to draw attention to the poster or to show annoyance about
| something (like being banned previously).
| belorn wrote:
| You got a point. Demonstration as a form of free speech
| is an interesting dilemma. Review spam/bombing for
| example can be non-commercial, non-misleading, non-
| fraudulent, while still being a bit of a grey-zone.
| Removing them is also fairly controversial. Outside the
| web we have a similar problem when demonstrations and
| strikes are causing disruption in society. Obviously
| demonstration and strikes should be legal and are
| protected by free speech, but at the same time there are
| exceptions when they are not.
|
| I am unsure if one would construct a objective fair model
| for how to moderate such activity.
| thesuitonym wrote:
| >a site like twitter would then actually have to think
| about questions like first-amendment.
|
| I wish people understood that the first amendment does not
| have anything to do with social media sites allowing people
| to say anything. Twitter is not a public square, no matter
| how much you want it to be.
| asddubs wrote:
| I love that this fucking twitter thread has a commercial break
| in the middle of it.
|
| edit: it has multiple commercial breaks!
| mmastrac wrote:
| Unrolled thread: https://mem.ai/p/D0AfFRGYoKkyW5aQQ1En
| top_sigrid wrote:
| Wait what?
|
| If you want a decent unroll, one example would be
| threadreaderapp:
| https://threadreaderapp.com/thread/1586955288061452289.html
| datan3rd wrote:
| I think email might be a good system to model this on. In
| addition to an inbox, almost all providers offer a Spam folder,
| and others like Gmail separate items into 'Promotions' and
| 'Social' folders/labels. I imagine almost nobody objects to this.
|
| Why can't social media follow a similar methodology? There is no
| requirement that FB/Twitter/Insta/etc feeds be a single "unit".
| The primary experience would be a main feed (uncontroversial),
| but additional feeds/labels would be available to view platform-
| labeled content. A "Spam Feed" and a "Controversial Feed" and a
| "This Might Be Misinformation Feed".
|
| Rather than censoring content, it segregates it. Users are free
| to seek/view that content, but must implicitly acknowledge the
| platform's opinion by clicking into that content. Just like you
| know you are looking at "something else" when you go to your
| email Spam folder, you would be aware that you are venturing off
| the beaten path when going to the "Potential State-Sponsored
| Propaganda Feed". There must be some implicit trust in a singular
| feed which is why current removal/censorship schemas cause such
| "passionate" responses.
| wcerfgba wrote:
| I like Yishan's reframing of content moderation as a 'signal-to-
| noise ratio problem' instead of a 'content problem', but there is
| another reframing which follows from that: moderation is also an
| _outsourcing problem_ , in that moderation is about users
| outsourcing the filtering of content to moderators (be they all
| other users through voting mechanisms, a subset of privileged
| users through mod powers, or an algorithm).
|
| Yishan doesn't define what the 'signal' is, or what 'spam' is,
| and there will probably be an element of subjectivity to these
| which varies between each platform and each user on each
| platform. Thus successful moderation happens when moderators know
| what users want, i.e. what the users consider to be 'good
| content' or 'signal'. This reveals a couple of things about why
| moderation is so hard.
|
| First, this means that moderation actually _is_ a content
| problem. For example, posts about political news are regularly
| removed from Hacker News because they are off-topic for the
| community, i.e. we don't consider that content to be the
| 'signal' that we go to HN for.
|
| Second, moderation can only be successful when there is a shared
| understanding between users and moderators about what 'signal'
| is. It's when this agreement breaks down that moderation becomes
| difficult or fails.
|
| Others have posted about the need to provide users with the tools
| to do their own moderation in a decentralised way. Since the
| 'traditional'/centralised approach creates a fragile power
| dynamic which requires this shared understanding of signal, I
| completely understand and agree with this: as users we should
| have the power to filter out content we don't like to see.
|
| However, we have to distinguish between general and topical
| spaces, and to determine which communities live in a given space
| and what binds different individuals into collectives. Is there a
| need for a collective understanding of what's on-topic? HN is not
| Twitter, it's designed as a space for particular types of people
| to share particular types of content. Replacing 'traditional' or
| centralised moderation with fully decentralised moderation risks
| disrupting the topicality of the space and the communities which
| inhabit it.
|
| I think what we want instead is a 'democratised' moderation, some
| way of moderating that removes a reliance on a 'chosen few', is
| more deliberate about what kinds of moderation need to be
| 'outsourced', and which allows users to participate in a shared
| construction of what they mean by 'signal' or 'on-topic' for
| their community. Perhaps the humble upvote is a good example and
| starting point for this?
|
| Finally, in the interest of technocratic solutions, particularly
| around spam (which I would define as repetitive content), has
| anyone thought about rate limits? Like, yeah if each person can
| only post 5 comments/tweets/whatever a day then you put a cap on
| how much total content can be created, and incentivise users to
| produce more meaningful content. But I guess that wouldn't allow
| for all the _sick massive engagement_ that these attention
| economy walled garden platforms need for selling ads...
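|
| For what it's worth, the mechanics of such a cap are trivial;
| a minimal sketch, assuming a per-user daily counter (the names
| are invented):
|
|     # Per-user daily posting cap, e.g. 5 posts per day.
|     from collections import defaultdict
|     from datetime import date
|
|     DAILY_LIMIT = 5
|     counts = defaultdict(int)  # (user_id, day) -> posts made
|
|     def try_post(user_id: str) -> bool:
|         key = (user_id, date.today())
|         if counts[key] >= DAILY_LIMIT:
|             return False  # over quota; reject the post
|         counts[key] += 1
|         return True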
| [deleted]
| bravura wrote:
| Yishan's points are great, but there is a more general and
| fundamental question to discuss...
|
| Moderation is the act of removing content, i.e. of assigning
| a score of 1 or 0 to content.
|
| If we generalize, we can assign a score from 0 to 1 to all
| content. Perhaps this score is personalized. Now we have a user's
| priority feed.
|
| How should Twitter score content using personalization? Filter
| bubble? Expose people to a diversity of opinions? etc. Moderation
| is just a special case of this.
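|
| As a toy sketch (my own illustration, not any platform's
| actual ranking), the generalization looks like this:
|
|     # Moderation as the binary special case of scoring: every
|     # post gets a score in [0, 1]; classic moderation only
|     # ever assigns 0 or 1.
|     def rank_feed(posts, score):
|         # score(post) -> float in [0, 1], possibly personalized
|         return sorted(posts, key=score, reverse=True)
|
|     def moderate(posts, score, threshold=0.5):
|         # Binary moderation: drop anything scored too low.
|         return [p for p in posts if score(p) >= threshold]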
| panarky wrote:
| One size does not fit all.
|
| Some people want to escape the filter bubble, to expose their
| ideas to criticism, to strengthen their thinking and arguments
| through conflict.
|
| Other people want a community of like-minded people to share
| and improve ideas and actions collectively, without trolls
| burning everything down all the time.
|
| Some people want each of those types of community depending on
| the topic and depending on their mood at the time.
|
| A better platform would let each community decide, and make it
| easy to fork off new communities with different rules when a
| subgroup or individual decides the existing rules aren't
| working for them.
| rongopo wrote:
| Imagine there were many shades of up- and down-voting on HN,
| according to your earned karma points and to your interactions
| outside of your regular opinion echo chambers.
| lawrenceyan wrote:
| You can tell this guy is a genius at marketing.
|
| Smart to comment on his current pursuits in environmental
| terraforming knowing he's going to get eyeballs on any thread he
| writes.
| yamazakiwi wrote:
| I commented on another comment discussing this and they thought
| the opposite. I also thought it was a relatively good idea,
| albeit distracting.
| DelightOne wrote:
| Can there be a moderation bot that detects flamewars and steps
| in? It could enforce civility by limiting discussion to only go
| through the bot and by employing protocols like "each side
| summarize issues", "is this really important here", or "do you
| enjoy this".
|
| Engaging with the bot is supposed to be a rational barrier, a
| tool to put unproductive discussions back on track.
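|
| A crude sketch of what the detection half might look like (the
| signals and thresholds here are invented for illustration):
|
|     # Flag a thread when replies arrive fast and the exchange
|     # is dominated by two participants trading blows.
|     def looks_like_flamewar(replies, window=30):
|         # replies: ordered list of (author, minutes_timestamp)
|         if len(replies) < 10:
|             return False
|         last = replies[-1][1]
|         recent = [r for r in replies if last - r[1] <= window]
|         if len(recent) < 10:
|             return False
|         authors = [a for a, _ in recent]
|         top_two = sorted(set(authors), key=authors.count)[-2:]
|         share = sum(a in top_two for a in authors) / len(recent)
|         return share > 0.8  # mostly two people going at it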
| e40 wrote:
| Easier to read this:
|
| https://threadreaderapp.com/thread/1586955288061452289.html
| wackget wrote:
| Anyone got a TL;DR? I don't feel like trudging through 100
| sentences of verbal diarrhea.
| goatcode wrote:
| > you'll end up with a council of third-rate minds and
| politically-motivated hacks, and the situation will be worse than
| how you started.
|
| Wow, surprising honesty from someone affiliated with Reddit. I'm
| sad that I wasn't on the site during the time of the old guard.
| commandlinefan wrote:
| > I'm sad that I wasn't on the site during the time of the old
| guard.
|
| It really was great - I probably wouldn't care how horrible
| it's become if not for the fact that I remember how it used to
| be.
| anigbrowl wrote:
| Reposting this paper yet again, to rub in the point that social
| media platforms play host to _communities_ and communities are
| often very good at detecting interlopers and saboteurs and
| pushing them back out. And it turns out the most effective
| approach is to let people give bad actors a hard time. Moderation
| policies that require everyone to adhere to high standards of
| politeness in all circumstances are trying to reproduce the
| dynamics of kindergartens, and are not effective because the
| moderators are easily gamed.
|
| https://arxiv.org/pdf/1803.03697.pdf
|
| Also, if you're running or working for a platform and dealing
| with insurgencies, you will lose if you try to build any kind of
| policy around content analysis. Automated content analysis is
| generally crap because of semantic overloading (irony, satire,
| contextual humor), and manual content analysis is labor-intensive
| and immiserating, to the point that larger platforms like
| Facebook are legitimately accused of abusing their moderation
| staff by paying them peanuts to wade through toxic sludge and
| then dumping them as soon as they complain or ask for any kind of
| support from HR.
|
| To get anywhere you need to look at patterns of behavior and to
| scale you need to do feature/motif detection on dynamic systems
| rather than static relationships like friend/follower selections.
| However, this kind of approach is fundamentally at odds with many
| platforms' goal of maximizing engagement as means to the end of
| selling ad space.
| aerovistae wrote:
| These random detours into climate-related topics are insanely
| disruptive of an otherwise interesting essay. I absolutely hate
| this pattern. I see what he's trying to do - you don't want to
| read about climate change but you want to read this other thing
| so I'm going to mix them together so you can't avoid the one if
| you want the other - but it's an awful dark pattern and makes for
| a frustrating and confusing reading experience. I kept thinking
| he was making an analogy before realizing he was just changing
| topics at random again. It certainly isn't making me more
| interested in his trees project. If anything I'm less interested
| now.
| IncRnd wrote:
| Since the argument was so well-structured, the interim detour
| to climate-related topics was odd. The very argument was that
| spam can be detected by posting behaviors, yet the author
| engaged in those for his favored topic.
| incomingpain wrote:
| This CEO did the same thread 6 months ago and was blasted off the
| internet. You can see his thread here:
| https://twitter.com/yishan/status/1514938507407421440
|
| edit/ Guess it is working now?
|
| The most important post in his older thread:
| https://twitter.com/yishan/status/1514939100444311560
|
| He never ever justifies this point. The world absolutely has not
| changed in the context of censorship. Censorship apologetics
| notwithstanding.
|
| The realization that the world changed is a reveal. As CEO, he
| learnt where the censorship was coming from.
| Spivak wrote:
| What's wrong with this thread? It seems really level-headed
| and exactly accurate to the people I know IRL who are
| insane-but-left and insane-but-right and won't shut up about
| censorship, while if you look at their posts it's just
| "unhinged person picks fights with and yells at strangers."
|
| HN in general is convinced that social media is censoring right
| ideas because it skews counterculture and "grey tribe" and
| there have been a lot of high profile groups who claim right
| views while doing the most vile depraved shit like actively
| trying to harass people into suicide and celebrating it or
| directing massive internet mobs at largely defenseless non-
| public figures for clout.
| mikkergp wrote:
| > The world absolutely has not changed in the context of
| censorship.
|
| Citation needed
| incomingpain wrote:
| >Citation needed
|
| As I said in my post, he never justifies this point. To then
| turn it upon me to prove a negative?
|
| Devil's advocating against myself: I do believe the Parler
| deplatforming is the proof for what he says. The world has
| indeed changed, but anyone who knows the details sure isn't
| saying why. Why? Because revealing how the world has changed,
| in the usa, would have some pretty serious consequences.
|
| I don't know. I wish I could have a closed door, off record,
| tell me everything, conversation with yishan to have him tell
| me why he believes the world changed, in the context of
| social media censorship.
|
| In terms of public verified knowledge, nothing at all has
| changed in the context of censorship. I stand by the point.
| Elon obviously stands by this as well. Though given Elon's
| sudden multiweek delays on unbanning... I'm expecting he
| suddenly knows as well.
|
| >You're posting too fast. Please slow down. Thanks.
|
| Guess I'm not allowed to reply again today. No discussion
| allowed on HN.
|
| I do find it funny they say 'you're posting too fast' but I
| haven't been able to post on HN or reply to you for an hour.
| How "fast" am I really going. I expect it will be a couple
| more hours before I am allowed to post again. How dare I
| discuss a forbidden subject.
| lrm242 wrote:
| Huh? What do you mean unavailable? I see it just fine.
| r721 wrote:
| I can confirm - I saw "this tweet is unavailable" message or
| something similarly worded on first click too. Reloading
| fixed that.
| kmeisthax wrote:
| This is a very good way to pitch your afforestation startup
| accelerator in the guise of a talk on platform moderation. /s
|
| I'm pretty sure I've got some bones to pick with yishan from his
| tenure on Reddit, but everything he's said here is pretty
| understandable.
|
| Actually, I would like to develop his point about "censoring
| spam" a bit further. It's often said that the Internet "detects
| censorship as damage and routes around it". This is propaganda,
| of course; a fully censorship-resistant Internet is entirely
| unusable. In fact, the easiest way to censor someone online is
| through harassment, or DDoS attacks - i.e. have a bunch of people
| shout at you until you shut up. Second easiest is through doxing
| - i.e. make the user feel unsafe until they jump off platform and
| stop speaking. Neither of these require content removal
| capability, but they still achieve the goal of censorship.
|
| The point about old media demonizing moderation is something I
| didn't expect, but it makes sense. This _is_ the same old media
| that gave us cable news, after all. Their goal is not to inform,
| but to allure. In fact, I kinda wish we had a platform that
| explicitly refused to give them the time of day, but I'm pretty
| sure it's illegal to do that now[0], and even back a decade ago
| it would be financial suicide to make a platform only catering to
| individual creators.
|
| [0] For various reasons:
|
| - The EU Copyright Directive imposes an upload filtering
| requirement on video platforms that needs cooperation with old
| media companies in order to implement. The US is also threatening
| similar requirements.
|
| - Canada Bill C-11 makes Canadian content (CanCon) must-carry for
| all Internet platforms, including ones that take user-generated
| content. In practice, it is easier for old media to qualify as
| CanCon than actual Canadian individuals.
| nullc wrote:
| I've often pointed out that the concept of censorship as being
| only or primarily through removal of speech is an antiquated
| concept from a time before pervasive communications networks
| had almost effortlessly connected most of the world.
|
| Censorship in the traditional sense is close to impossible
| online today.
|
| Today censorship is often and most effectively about
| suppressing your ability to be heard, often by flooding out the
| good communications with nonsense, spam, abuse, or discrediting
| it by association (e.g. fill the forums of a political
| opponents with apparent racists). This turns the nigh
| uncensorability of modern communications methods on its head
| and makes it into a censorship tool.
|
| And, ironically, anyone trying to use moderation to curb this
| sort of censorious abuse is easily accused of 'censorship'
| themselves.
|
| I remain convinced that the best tool we have is topicality:
| When a venue has a defined topic you can moderate just to stay
| onto the topic without a lot of debatable value judgements (or
| bruised egos-- no reason to feel too bad about a post being
| moved or removed for being offtopic). Unfortunately, the
| structure of twitter pretty much abandons this critical tool.
|
| (and with reddit increasingly usurping moderation from
| subreddit moderators, it's been diminished there)
|
| Topicality doesn't solve all moderation issues, but once an
| issue has become too acrimonious it will inherently go off-
| topic: e.g. if your topic is some video game well participants
| calling each other nasty names is clearly off-topic. Topicality
| also reduces the incidence of trouble coming in from divisive
| issues that some participants just aren't interested in
| discussing-- If I'm on a forum for a video game I probably
| don't really want to debate abortion with people.
|
| In this thread we see good use of topicality at the top with
| Dang explicitly marking complaints about long form twitter
| offtopic.
|
| When it comes to spam, scaling considerations mean that you need
| to be able to deal with much of it without necessarily
| understanding the content. I don't think this should be
| confused with content blindness being desirable in and of
| itself. Abusive/unwelcoming interactions can occur both in the
| form (e.g. someone stalking someone around from thread to
| thread or repeating an argument endlessly) and in the content
| (continually re-litigating divisive/flame-bait issues that no
| one else wants to talk about, vile threatening messages, etc.)
|
| Related to topicality is that some users just don't want to
| interact with each other. We don't have to make a value
| judgement about one vs the other if we can provide space so
| that they don't need to interact. Twitter's structure isn't
| great for this either, but more the nature of near-monopoly
| mega platforms isn't great for it. Worse, twitter actively makes
| it hard-- e.g. if you've not followed someone who is network-
| connected to other people you follow twitter continually
| recommends their tweets (as a friend said: "No twitter, there
| is a reason I'm not following them") and because blocking is
| visible using it often creates drama.
|
| There are some subjects on HN where I might otherwise comment
| but I don't because I'd prefer to avoid interacting with a Top
| Poster who will inevitably be active in those subjects.
| Fortunately, there are plenty of other places where I can
| discuss those things where that poster isn't active.
|
| Even a relatively 'small' forum can easily have as many users
| as many US states' populations at the founding of the country. I
| don't think that we really need to have mega platforms with
| literally everyone on them and I see a fair amount of harm from
| it (including the effects of monoculture moderation gone
| wrong).
|
| In general, I think the less topic constrained you can make a
| venue the smaller it needs to be-- a completely topic-less
| social venue probably should have no more than a few dozen
| people. Twitter is both mega-topicless and ultra-massive-- an
| explosive mixture which will inevitably disappoint.
|
| Another tool I think many people have missed the value of is
| procedural norms including decorum. I don't believe that using
| polite language actually makes someone polite (in fact, the
| nastiest and most threatening remarks I've ever received were
| made with perfectly polite language)-- but some people are just
| unable to regulate their own behavior. When there is an easily
| followed set of standards for conduct you gain a bright line
| criteria that makes it easier to eject people who are too
| unable to control themselves. Unfortunately, I think the value
| of an otherwise pointless procedural conformity test is often
| lost on people today, though they appear common in historical
| institutions. (Maybe a sign of the ages of the creators of
| these things: As a younger person I certainly grated against
| 'pointless' conformity requirements, as an older person I see
| more ways that their value can pay for their costs: I'd rather
| not waste my time on someone who can't even manage to go
| through the motions to meet the venue's standards)
|
| Early on in Wikipedia I think we got a lot of mileage out of
| this: the nature of the site essentially hands every user a
| loaded gun (ability to edit almost everything, including
| elements on the site UI) and then tells them not to do use it
| abusively rather than trying to technically prevent them from
| using it abusively. Some people can't resist and are quickly
| kicked out without too much drama. Had those same people been
| technically prevented they would have hung around longer and
| created trouble that was harder to kick them out over (and I
| think as the site added more restrictions on new/casual users
| the number of issues from poorly behaved users increased).
| mountainriver wrote:
| I love that he's for flame wars, go figure that's all Reddit is
| saurik wrote:
| > there will be NO relation between the topic of the content and
| whether you moderate it, because it's the specific posting
| behavior that's a problem
|
| I get why Yishan wants to believe this, but I also feel like the
| entire premise of this argument is then in some way against a
| straw man version of the problem people are trying to point to
| when they claim moderation is content-aware.
|
| The issue, truly, isn't about what the platform moderates so much
| as the bias between when it bothers to moderate and when it
| doesn't.
|
| If you have a platform that bothers to filter messages that "hate
| on" famous people but doesn't even notice messages that "hate on"
| normal people--even if the reason is just that almost no one sees
| the latter messages and so they don't have much impact and your
| filters don't catch it--you have a (brutal) class bias.
|
| If you have a platform that bothers to filter people who are
| "repetitively" anti large classic tech companies for the evil
| things they do trying to amass money and yet doesn't filter
| people who are "repetitively" anti crypto companies for the evil
| things _they_ do trying to amass money--even if it feels to you
| as the moderator that the person seems to have a point ;P--that
| is another bias.
|
| The problem you see in moderation--and I've spent a LONG time
| both myself being a moderator and working with people who have
| spent their lives being moderators, both for forums and for live
| chat--is that moderation and verification of everything not only
| feels awkward but simply _doesn't scale_, and so you try to
| build mechanisms to moderate _enough_ that the forum seems to
| have a high _enough_ signal-to-noise ratio that people are happy
| and generally stay.
|
| But the way you get that scale is by automating and triaging: you
| build mechanisms involving keyword filters and AI that attempt to
| find and flag low signal comments, and you rely on reports from
| users to direct later attention. The problem, though, is that
| these mechanisms inherently have biases, and those biases
| absolutely end up being inclusive of biases that are related to
| the content.
|
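| To make the triage point concrete, the funnel is roughly
| shaped like this (a schematic sketch of the pattern I am
| describing, not any platform's actual code):
|
|     # Cheap automated filters first, then user reports; only
|     # the survivors get human attention. Each stage is biased.
|     def triage(posts, keyword_hit, ml_score, report_count):
|         queue = []
|         for post in posts:
|             if keyword_hit(post):          # bias: which words are listed
|                 queue.append((post, "keyword"))
|             elif ml_score(post) > 0.9:     # bias: the training data
|                 queue.append((post, "model"))
|             elif report_count(post) >= 3:  # bias: who bothers to report
|                 queue.append((post, "reports"))
|         return queue  # humans review this sliver, nothing else
|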
| Yishan seems to be arguing that perfectly-unbiased moderation
| might seem biased to some people, but he isn't bothering to look
| at where or why moderation often isn't perfect to ensure that
| moderation actually works the way he claims it should, and I'm
| telling you: it never does, because moderation isn't omnipresent
| and cannot be equally applied to all relevant circumstances. He
| pays lip service to it in one place (throwing Facebook under the
| bus near the end of the thread), and yet fails to then realize
| _this is the argument_.
|
| At the end of the day, real world moderation is certainly biased.
| _And maybe that's OK!_ But we shouldn't pretend it isn't biased
| (as Yishan does here) or even that that bias is always in the
| public interest (as many others do). That bias may, in fact, be
| an important part of moderating... and yet, it can also be
| extremely evil and difficult to discern from "I was busy" or "we
| all make mistakes" as it is often subconscious or with the best
| of intentions.
| karaterobot wrote:
| There were indeed some intelligent, thoughtful, novel insights
| about moderation in that thread. There were also... two
| commercial breaks to discuss his new venture? Eww. While
| discussing how spam is the least controversial type of noise you
| want to filter out? I appreciate the good content, I'm just not
| used to seeing product placement wedged in like that.
| yamazakiwi wrote:
| I thought it was simultaneously annoying and interesting so it
| sort of cancelled itself out.
| zcombynator wrote:
| Spam is unwelcome for a simple reason: there is no real person
| behind it.
| kodt wrote:
| Not always true. In fact often spam is simply self-promotion by
| the person posting it.
| bombcar wrote:
| In fact that type of spam is more annoying than the BUY @#$@$
| NOW generic bot-spam, as it is way more insidious.
| hackerlight wrote:
| If I was behind the exact same spam, would it be welcomed? Come
| on.
| mcguire wrote:
| Correct me if I'm wrong, but this sounds very much like what dang
| does here.
| dang wrote:
| All: this is an interesting submission--it contains some of the
| most interesting writing about moderation that I've seen in a
| long time*. If you're going to comment, please make sure you've
| read and understand his argument and are engaging with it.
|
| If you dislike long-form Twitter, here you go:
| https://threadreaderapp.com/thread/1586955288061452289.html - and
| please _don't_ comment about that here. I know it can be
| annoying, but so is having the same offtopic complaints upvoted
| to the top of every such thread. This is why we added the site
| guideline: " _Please don 't complain about tangential annoyances
| --e.g. article or website formats_" (and yes, this comment is
| also doing this. Sorry.)
|
| Similarly, please resist being baited by the sales interludes in
| the OP. They're also offtopic and, yes, annoying, but this is why
| we added the site guideline "_Please don't pick the most
| provocative thing in an article to complain about--find something
| interesting to respond to instead._"
|
| https://news.ycombinator.com/newsguidelines.html
|
| * even more so than
| https://news.ycombinator.com/item?id=33446064, which was also
| above the median for this topic.
| rglover wrote:
| A fun idea that I'm certain no one has considered with any level
| of seriousness: don't moderate anything.
|
| Build the features to allow readers to self-moderate and make it
| "expensive" to create or run bots (e.g., make it so API access is
| limited without an excessive fee, limit screen scrapers, etc).
| The "pay to play" idea will eliminate an insane amount of the
| junk, too. Any free network is inherently going to have problems
| of chaos. Make it so you can only follow X people with a free
| account, but upgrade to follow more. Limit tweets/replies/etc
| based on this. Not only will it work, but it will remove the need
| for all of the moderation and arguments around bias.
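|
| One way to picture the tiering (the numbers and tier names are
| pulled out of thin air):
|
|     # Hypothetical free-vs-paid limits: friction replaces
|     # moderation.
|     TIERS = {
|         "free": {"follows": 100, "posts_per_day": 20},
|         "plus": {"follows": 2000, "posts_per_day": 200},
|         "pro": {"follows": None, "posts_per_day": None},
|     }
|
|     def allowed(tier: str, action: str, used: int) -> bool:
|         limit = TIERS[tier][action]
|         return limit is None or used < limit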
|
| As for advertisers (why any moderation is necessary in the first
| place beyond totalitarian thought control): have different tiers
| of quality. If you want a higher quality audience, pay more. If
| you're more concerned about broad reach (even if that means
| getting junk users), pay less. Beyond that, advertisers/brands
| should set their expectations closer to reality: randomly
| appearing alongside some tasteless stuff on Twitter does not mean
| you're _vouching_ for those ideas.
| munificent wrote:
| _> Build the features to allow readers to self-moderate_
|
| This is effectively impossible because of the bullshit
| asymmetry principle[1]. It's easier to create content that
| needs moderation than it is to moderate it. In general, there
| is a fundamental asymmetry to life that it takes less effort to
| destroy than it does to create, less work to harm than heal.
| With a slightly sharpened piece of metal and about a newton of
| force, you can end a life. No amount of effort can resurrect
| it.
|
| It simply doesn't scale to let bad actors cause all the harm
| they want and rely on good actors to clean up their messes
| after the fact. The harm must be prevented before it does
| damage.
|
| _> make it "expensive" to create or run bots (e.g., make it
| so API access requires a hefty fee, limit screen
| scrapers, etc)._
|
| The simplest approach would be no API at all, but that won't
| stop scammers and bad actors. It's effectively impossible to
| prohibit screen scrapers.
|
| _> Make it so you can only follow X people with a free
| account, but upgrade to follow more. Limit tweets /replies/etc
| based on this. Not only will it work, but it will remove the
| need for all of the moderation and arguments around bias._
|
| This is, I think, the best idea. If having an identity and
| sharing content costs actual money, you can at least make
| spamming not be cost effective. But that still doesn't
| eliminate human bad actors griefing others. Some are happy to
| pay to cause mayhem.
|
| There is no simple technical solution here. Fundamentally, the
| value proposition of a community is the other good people you
| get to connect to. But some people are harmful. They may not
| always be harmful, or may be harmful only to some people. For a
| community to thrive, you've got to encourage the good behaviors
| and police the bad ones. That takes work and human judgement.
|
| [1]: https://en.wikipedia.org/wiki/Brandolini%27s_law
| rglover wrote:
| > But some people are harmful. They may not always be
| harmful, or may be harmful only to some people.
|
| This is a fundamental reality of life that cannot be avoided.
| There is no magical solution (technical or otherwise) to
| prevent this. At best, you can put in some basic safeguards
| (like what you/I have stated above) but ultimately people
| need to learn to accept that you can't make everything 100%
| safe.
|
| Also, things like muting/blocking work but the ugly truth is
| that people love the negative excitement of fighting online
| (it's an outlet for life's pressure/disappointments).
| Accepting _that_ reality would do a lot of people a lot of
| good. A staggering amount of the negativity one encounters on
| social media is self-inflicted, by either provoking others or
| engaging when provoked.
| etchalon wrote:
| 1. There are plenty of places online that "don't moderate
| anything". In fact, nearly all of the social networks started
| off that way.
|
| The end result is ... well, 4Chan.
|
| 2. "Self-moderation" doesn't work, because it's work. User's
| don't want to have constantly police their feeds and block
| people, topics, sites, etc. It's also work that never ends. Bad
| actors jump from one identity to the next. There are no
| "static" identifiers that are reliable enough for a user to
| trust.
|
| 3. Advertisers aren't going to just "accept" that their money
| is supporting content they don't want to be associated with.
| And they're also not interested in spending time white-listing
| specific accounts they "know" are good.
| rglover wrote:
| > The end result is ... well, 4Chan.
|
| And? Your opinion of whether that's bad is subjective, yet
| the people there are happy with the result (presumably, as
| they keep using/visiting it).
|
| > Self-moderation" doesn't work, because it's work.
|
| So in other words: "I'm too lazy to curate a non-threatening
| experience for myself which is my responsibility because the
| offense being taken is my own." Whether or not you're willing
| to filter things out that upset you is a personal problem,
| not a platform problem.
|
| > Advertisers aren't going to just "accept" that their money
| is supporting content they don't want to be associated with.
|
| It's not. Twitter isn't creating the content nor are they
| financing the content (e.g. like a Netflix type model). It's
| user-generated which is completely random and subject to
| chaos. If they can't handle that, they shouldn't advertise
| there (hence why a pay-to-play option is best as it prevents
| a revenue collapse for Twitter). E.g., if I'm selling
| crucifixes, I'm not going to advertise on slutmania.com
|
| ---
|
| Ultimately, people need to quit acting like everything they
| come into contact with needs to be respectful of every
| possible issue or disagreement they have with it. It's
| irrational, entitled, and childish.
| etchalon wrote:
| 1. I didn't say whether it was good or bad, just that the
| product you're describing already exists.
|
| 2. It's a platform problem. If you make users do work they
| don't want to do in order to make the platform pleasant to
| use, they won't do the work, the platform will not be
| pleasant to use, and they'll use a different platform that
| doesn't make them do that work.
|
| 3. "If they can't handle it, they shouldn't advertise
| there." Correct! They won't advertise there. That's the
| point.
|
| There are already unmoderated, "you do the work, not us",
| "advertisers have to know what they're getting into"
| platforms, and those platforms are niche, with small
| audiences, filled with low-tier/scam ads and are generally
| not profitable.
| lambic wrote:
| It's a problem of scale.
|
| Usenet and IRC used to be self-moderated. The mods in each
| group or channel would moderate their own userbase, ban people
| who were causing problems, step in if things were getting too
| heated. At a broader level net admins dealt with the spam
| problem system wide, coordinating in groups in the news.admin
| hierarchy or similar channels in IRC.
|
| This worked fine for many years, but then the internet got big.
| Those volunteer moderators and administrators could no longer
| keep up with the flood of content. Usenet died (yes, it's still
| around, but it's dead as any kind of discussion forum) and IRC
| is a shell of its former self.
| rglover wrote:
| Right, which is solved by the pay to play limits. This would
| essentially cut the problem off immediately and it would be
| of benefit to everyone. If it actually cost people to "do bad
| stuff" (post spam, vitriol, etc), they're far less-likely to
| do it as the incentives drop off.
|
| The dragon folks seem to be chasing is a Twitter that is free
| but perfect (a have-your-cake-and-eat-it-too problem). That
| will never happen, and it only invites more
| unnecessary strife between sociopolitical and socioeconomic
| factions as they battle for narrative control.
| invalidusernam3 wrote:
| Just add a dislike button and collapse controversial tweets at
| the bottom. It works well for reddit. Let the community
| moderate itself.
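|
| For reference, the score behind reddit-style "controversial"
| sorting is public; a close paraphrase of the open-sourced
| version (exact constants may differ) is:
|
|     def controversy(ups: int, downs: int) -> float:
|         # Posts with many votes split evenly between up and
|         # down score highest; one-sided posts score ~0.
|         if ups <= 0 or downs <= 0:
|             return 0.0
|         magnitude = ups + downs
|         balance = downs / ups if ups > downs else ups / downs
|         return magnitude ** balance
|
| A client could then collapse anything whose controversy score
| crosses a threshold instead of removing it.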
| threeseed wrote:
| Reddit tried to just let communities moderate themselves.
|
| It resulted in rampant child pornography, doxxing, death
| threats, gory violence etc. It epitomised the worst of
| humanity.
|
| Now the Reddit admins keep a watch on moderators and if their
| subreddits do not meet site-wide standards they are replaced.
| pessimizer wrote:
| > It resulted in rampant child pornography, doxxing, death
| threats, gory violence etc. It epitomised the worst of
| humanity.
|
| It resulted in reddit. That style of moderation is how reddit
| became reddit; so it should also get credit for whatever you
| think is good about reddit. The new (half-decade old) reddit
| moderation regime was a new venture that was hoping to retain
| users who were initially attracted by the old moderation
| regime.
| threeseed wrote:
| This is revisionist history.
|
| My Reddit account is 16 years old. I was there in the very
| early days of the site well before the Digg invasion and
| well before it gained widespread popularity.
|
| Reddit's growth was never because it allowed anything and
| everything. It was because it
| was a much more accessible version of Slashdot. And it was
| because Digg did their redesign and it ended up with a
| critical mass of users. Then they started opening up the
| subreddits and it exploded from there.
|
| The fact that Reddit is growing without that content shows
| that it wasn't that important to begin with.
| pixl97 wrote:
| You mean it resulted in the place that couldn't pay the
| bills and goes around asking for VC money to keep the
| servers on?
|
| Unmoderated hell holes tend to have to survive on
| questionable funding and rarely grow to any size.
| SV_BubbleTime wrote:
| If there is one thing I know about tech companies in the
| last 20 years, it's that they never want VC money unless
| they are in trouble... right?
| thrown_22 wrote:
| It resulted in people _saying_ all those things happened when
| they never did.
| threeseed wrote:
| You mean like this list of banned subreddits:
|
| https://en.wikipedia.org/wiki/Controversial_Reddit_communit
| i...
|
| "The community (Beatingwomen), which featured graphic
| depictions of violence against women, was banned after its
| moderators were found to be sharing users' personal
| information online"
|
| "According to Reddit administrators, photos of gymnast
| McKayla Maroney and MTV actress Liz Lee, shared to 130,000
| people on popular forum r/TheFappening, constitute child
| pornography"
| thrown_22 wrote:
| You mean like the people who are telling us that happened
| also said:
|
| > CNN is not publishing "HanA*holeSolo's" name because he
| is a private citizen who has issued an extensive
| statement of apology, showed his remorse by saying he has
| taken down all his offending posts, and because he said
| he is not going to repeat this ugly behavior on social
| media again. In addition, he said his statement could
| serve as an example to others not to do the same.
|
| >CNN reserves the right to publish his identity should
| any of that change.
|
| https://edition.cnn.com/2017/07/04/politics/kfile-reddit-
| use...
|
| Yeah, I totally trust these people to not lie.
| tick_tock_tick wrote:
| And Reddit was still a better site back then.
|
| Their anything-goes policy is also a huge part of what made
| them successful back in the day.
| P_I_Staker wrote:
| I think it's sad that this seems to be getting so many
| downvotes. You don't have to agree, but this was helpful
| commentary.
|
| Reddit definitely had all of these issues, and they were
| handled horribly.
| rchaud wrote:
| Digg handled things horribly. Reddit seems to have done
| just fine.
| dncornholio wrote:
| So instead of moderating users, you moderate moderators,
| still seems like a net win.
| [deleted]
| tjoff wrote:
| > _Machine learning algorithms are able to accurately identify
| spam_
|
| Nope. Not even close.
|
| > _and it's not because they are able to tell it's about Viagra
| or mortgage refinancing_
|
| Funny, because they can't even tell that.
|
| Which is why mail is being ruined by google and microsoft. Yes
| you could argue that they have incentives to do just that. But
| that doesn't change the fact that they can't identify spam.
| sbarre wrote:
| Do you have more info on why you believe this?
|
| My experience has been that Google successfully filters spam
| from my Inbox, consistently.
|
| I get (just looked) 30-40 spam messages a day. I've been on
| Gmail since the invite-only days, so I'm on a lot of lists, I
| guess...
|
| Very, very rarely do they get through the filter.
|
| I also check it every couple of days to look for false-
| positives, and maybe once a month or less I find a newsletter
| or automated promo email in there for something I was actually
| signed up for, but never anything critical.
| [deleted]
| tjoff wrote:
| Just see what gets through and more importantly which valid
| mails are being marked as spam. It is evident that they
| haven't got a clue.
|
| So, how do they "solve" it? By using the "reputation" of your
| IP addresses and trusting that more than the content of the
| email.
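|
| A toy illustration of the trade-off being described, with a
| hypothetical content-classifier score and IP reputation table
| (both made up here):
|
|     # 1.0 = fully trusted sender IP, 0.0 = known-bad
|     IP_REPUTATION = {"203.0.113.7": 0.9, "198.51.100.2": 0.1}
|
|     def spam_score(content_score: float, sender_ip: str,
|                    reputation_weight: float = 0.7) -> float:
|         # content_score: 0..1 from a content model, higher =
|         # spammier. Unknown IPs get a neutral 0.5 reputation.
|         ip_trust = IP_REPUTATION.get(sender_ip, 0.5)
|         return (reputation_weight * (1.0 - ip_trust)
|                 + (1.0 - reputation_weight) * content_score)
|
| With reputation_weight this high, a clean IP can slip spammy
| content through and a dirty IP taints legitimate mail - which
| is exactly the complaint above.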
| zimpenfish wrote:
| I've got 6 mails in my gmail spam (in the last month) - 2
| of them are not spam which is about normal for what I see
| (30-40% non-spam.)
| tjoff wrote:
| Yet most people don't seem to ever look in their spam
| folder. Conclusion: gmail has a great spam-filter! :(
| ketzo wrote:
| You're talking way too hyperbolically to take seriously.
|
| Yes, GMail does, in fact, "have a clue." They do pretty
| well. They're not perfect, and I have specific complaints,
| but to pretend they're totally clueless and inept
| discredits anything else you're saying.
| tjoff wrote:
| Just as saying that machine learning can identify spam
| discredits anything else the ex-reddit CEO says.
|
| I'm sure Gmail has a clue from their point of view, but that
| view doesn't align with mine (nor, I'd argue, with most of
| their users'). Their view also happens, coincidentally, to
| strengthen their hold on the market, but who cares?
| fulafel wrote:
| There seems to be no mention of (de)centralization or use of
| reputation in the comments here or in the twitter thread.
|
| Everyone is discussing a failure mode of a centralized and
| centrally moderated system without questioning those
| properties, but they run counter to traditional internet-based
| communication platforms like email, usenet, irc, etc.
| excite1997 wrote:
| He frames this as a behavior problem, not a content problem. The
| claim is that your objective as a moderator should be to get rid
| of users or behaviors that are bad for your platform, in the sense
| that they may drive users away or make them less happy. And that
| if you do that, you supposedly end up with a fundamentally robust
| and apolitical approach to moderation. He then proceeds to blame
| others for misunderstanding this model when the outcomes appear
| politicized.
|
| I think there is a gaping flaw in this reasoning. Sometimes, what
| drives your users away or makes them less happy _is_ challenging
| the cultural dogma of a particular community, and at that point,
| the utilitarian argument breaks down. If you're on Reddit, go to
| /r/communism and post a good-faith critique of communism... or go
| to /r/gunsarecool and ask a pro-gun-tinged question about self-
| defense. You will get banned without any warning. But that ban
| passes the test outlined by the OP: the community does not want
| to talk about it precisely because it would anger and frustrate
| people, and they have no way of telling you apart from dozens of
| concern trolls who show up every week. So they proactively
| suppress dissent because they can predict the ultimate outcome.
| They're not wrong.
|
| And that happens everywhere; Twitter has scientific-sounding
| and seemingly objective moderation criteria, but they don't lead
| to uniform political outcomes.
|
| Once you move past the basics - getting rid of patently malicious
| / inauthentic engagement - moderation becomes politics. There's
| no point in pretending otherwise. And if you run a platform like
| Twitter, you will be asked to do that kind of moderation - by
| your advertisers, by your users, by your employees.
| Atheros wrote:
| > Challenging the cultural dogma [doesn't work]
|
| That is a byproduct of Reddit specifically. With 90s style
| forums, this kind of discussion happens just fine because it
| ends up being limited to a few threads. On Reddit, all
| community members _must_ interact in the threads posted in the
| last day or two. After two days they are gone and all previous
| discussion is effectively lost. So maybe this can be fixed by
| having sub-reddits sort topics by continuing engagement rather
| than just by age and upvotes.
|
| A good feature would be for Reddit moderators to be able to set
| the desired newness for their subreddit. /r/aww should strive
| for one or two days of newness (today's status quo). But
| /r/communism can have one year of newness. That way the
| concerned people and concern trolls can be relegated to the
| yearly threads full of good-faith critiques of communism and
| the good-faith responses and everyone else can read the highly
| upvoted discussion. Everything else could fall in-between.
| /r/woodworking, which is now just people posting pictures of
| their creations, could split: set the newness to four months
| and be full of useful advice; set the newness for
| /woodworking_pics to two days to experience the subreddit like
| it is now. I feel like that would solve a lot of issues.
| bombcar wrote:
| The whole idea of "containment threads" is a powerful one
| that works very well in older-style forums, but not nearly as
| well on Reddit. "containment subs" isn't the same thing at
| all, and the subs that try to run subsubs dedicated to the
| containment issues usually find they die out.
| rootusrootus wrote:
| Having read everything he wrote, it's interesting to see how
| the discussion on HN matches up.
| cwkoss wrote:
| Yishan could really benefit from some self editing. There are
| like 5 tweets worth of interesting content in this hundred tweet
| meandering thread.
| bruce343434 wrote:
| It might just be an effect of the medium.
| bombcar wrote:
| I mean, it's clearly designed to get people to read the ads he
| has in it.
| [deleted]
| MichaelZuo wrote:
| There are some neat ideas raised by Yishan.
|
| One is 'put up or shut up' for appeals of moderator decisions.
|
| That is, anyone who wishes to appeal needs to also consent to have
| all their activities on the platform, relevant to the decision,
| revealed publicly.
|
| It definitely could prevent later accusations of secretiveness or
| arbitrariness. And it probably would also make users think more
| in marginal cases before submitting.
| wyldberry wrote:
| This also used to be relatively popular in the early days of
| League of Legends, people requesting a "Lyte Smite". Players
| would make inflammatory posts on the forums saying they were
| banned wrongly, and Lyte would come in with the chatlogs,
| sometimes escalating to perma-ban. I did always admire this
| system and thought it could be improved.
|
| There's also a lot of drama around Lyte in his personal life,
| should you choose to go looking into that.
| cloverich wrote:
| It is expensive to do, because you have to ensure the content
| being made public doesn't dox / hurt someone other than the
| poster. But I think you could add two things to the recipe. 1 -
| real user validation. So the banned user can't easily make
| another account. Obviously not easy and perhaps not even
| possible, but essential. 2 - increased stake. Protest a short
| ban, and if you lose, you get an even longer ban.
| TulliusCicero wrote:
| I've never understood the idea that PMs on a platform must be
| held purely private by the platform even in cases where:
|
| * There's some moderation dispute that involves the PMs
|
| * At least one of the parties involved consents to release the
| PMs
|
| The latter is the critical bit, to me. When you send someone a
| chat message, or an email, obviously there's nothing actually
| stopping them from sharing the content of the message with
| others if they feel that way, either legally or technically. If
| an aggrieved party wants to share a PM, everyone knows they can
| do so -- the only question mark is that they may have faked it.
|
| To me the answer here seems obvious: allow users to mark a
| PM/thread as publicly visible. This doesn't make it more public
| than it otherwise could be, it just lets other people verify
| the authenticity, that they're not making shit up.
| whitexn--g28h wrote:
| This is something that occurs on twitch streams sometimes.
| While it can be educational for users to see why they were
| banned, some appeals are just attention seeking. Occasionally,
| though, it exposes the banned user's or, worse, a victim user's
| personal information (e.g. mental health issues, age, location)
| and can lead to both users being targeted and bad behaviour by
| the audience. For example Bob is banned for bad behaviour
| towards Alice (threats, doxxing), by making that public you are
| not just impacting Bob, but could also put Alice at risk.
| etchalon wrote:
| I think this idea rests on the foundation of "shame."
|
| But there are entire groups of users that not only don't feel
| shame about their activities, but are proud of them.
| codemonkey-zeta wrote:
| But those users would be left alone in their pride in the
| put-up-or-shut-up model, because everybody else would see the
| mistakes of that user and abandon them. So the shame doesn't
| have to be effective for the individual, it just has to
| convince the majority that the user is in the wrong.
| kelnos wrote:
| Right. To put it another way, this "put up or shut up"
| system, in my mind, isn't even really there to convince the
| person who got moderated that they were in the wrong. It's
| to convince the rest of the community that the moderation
| decision was unbiased and correct.
|
| These news articles about "platform X censors people with
| political views Y" are about generating mass outrage from a
| comparatively small number of moderation decisions. While
| sure, it would be good for the people who are targeted by
| those moderation decisions to realize "yeah, ok, you're
| right, I was being a butthole", I think it's much more
| important to try to show the reactionary angry mob that
| things are aboveboard.
| etchalon wrote:
| The most high profile, and controversial, "moderation"
| decisions made by large platforms recently have generally
| been for obvious, and very public, reasons.
| shashanoid wrote:
| kahrl wrote:
| Yishan Wong is an American engineer and entrepreneur who was
| CEO of Reddit from March 2012 until his resignation in November
| 2014.
|
| Did you need help looking that up? Or were you just being edgy?
| ilyt wrote:
| It's kinda funny that many of the problems he's mentioning are
| exactly how moderation on reddit currently works.
|
| Hell, the newly revamped "block user" mode got extra gaslighting
| as a feature: a blocked person now can't reply to _anyone_ under
| the comments of the person who blocked them, not just to that
| person. So anyone who doesn't like people discussing how they
| are wrong can just block those who disagree, and they will not
| be able to answer any of their comments.
| Ztynovovk wrote:
| Seems reasonable to me. IRL I can walk away from a voluntary
| discussion when I want. If people want to continue talking
| after I've left they can form their own discussion group and
| continue with the topic.
|
| Think this is good because it usually stops a discussion from
| dissolving into a meaningless flame war.
|
| It allows the power of moderation to stay with those in the
| discussion.
| scraptor wrote:
| Now imagine if some random other people in the group who
| happen to have posts higher in the tree were able to silently
| remove you without anyone knowing.
| Ztynovovk wrote:
| Meh, it's the most reasonable bad solution imo. I've had
| some pretty heated convos on reddit and have only ever been
| blocked once.
| chinchilla2020 wrote:
| The tweetstorm format is such a horrible way to consume articles.
| I cannot wait for twitter to collapse so I never have to read
| another essay composed of 280-character paragraphs.
| swarnie wrote:
| Twitter has to be the worst possible medium for reading an essay.
| joemi wrote:
| You're far from the only person who thinks this, but please see
| dang's stickied comment at the top of the thread.
| ItsBob wrote:
| Here's a radical idea: let me moderate my own shit!
|
| Twitter is a subscription-based system (by this, I mean that I
| have to subscribe to someone's content) so if I subscribe to
| someone and don't like what they say then buh-bye!
|
| Let me right click on a comment/tweet (I don't use social media
| so not sure of the exact terminology the kids use these days)
| with the options of:
|
| - Hide this comment
|
| - Hide all comments in this thread from this user
|
| - Block all comments in future from this user (you can undo
| this in settings).
|
| That would work for me.
| threeseed wrote:
| You're not Twitter's customer. Advertisers are.
|
| And they don't want their brands to be associated with
| unpleasant content.
| q1w2 wrote:
| To quote the article...
|
| > MAYBE sometimes an advertiser will get mad, but a backroom
| sales conversation will usually get them back once the whole
| thing blows over.
| lettergram wrote:
| People producing products don't actually care. I'd love to
| see stats on this made public (I've seen internal metrics).
| Facebook and Twitter don't even show you what your ad is
| near. You fundamentally just have to "trust" them.
|
| If you have a billboard with someone being raped beneath it
| and a photo goes viral, no one would blame the company
| advertising on the billboard. Frankly, no one will associate
| the two to change their purchasing habits.
|
| The reason corporations care is ESG scores and activist
| employees.
|
| Also, these brands still advertise in places where public
| executions happen (Saudi Arabia). No one is complaining
| there.
| pfisch wrote:
| People do care. If you openly associate your brand with
| strong support for a pro-pedophile or pro-rape position,
| customers will care about that.
|
| The idea that they won't seems pretty ridiculous.
| threeseed wrote:
| > Facebook and Twitter don't even show you what your ad is
| near
|
| But their customers complain about it, media picks it up
| and it becomes an outrage story.
|
| That's what brands are scared of.
| Spivak wrote:
| Like I can't believe that this reasoning doesn't resonate
| with people even outside of advertisers. Who wants to be on a
| social network where if one of your posts breaks containment
| you spend the next few weeks getting harassed by people who
| just hurl slurs and insults at you. This is already right now
| a problem on Twitter and opening the floodgates is the
| opposite of helping.
| etchalon wrote:
| This reasoning is generally lost on people who are not
| themselves a target for slurs and harassment.
| tick_tock_tick wrote:
| > few weeks getting harassed by people who just hurl slurs
| and insults at you
|
| Just ignore it or block them. The only time it's an issue
| is when you engage. Seriously, the only people with this
| issue are the ones who can't let shit go.
| fzeroracer wrote:
| I feel like you don't understand the issue here at all.
|
| Blocking them requires first engaging with their content.
| This is what people always miss in the discussion. To
| know if you need to block someone or not involves parsing
| their comment and then throwing it in the bin.
|
| The same goes for ignoring it. And eventually people get
| tired of the barrage of slurs and just leave because the
| brainpower required to sift through garbage isn't worth
| it anymore. That's how you end up with places like Voat.
| tick_tock_tick wrote:
| Most customers don't care. The only reason it's a real issue
| is that Twitter users run the marketing departments at a lot
| of companies, and they are incorrectly convinced people care.
| spaced-out wrote:
| >Most customers don't care
|
| How much is "most"? What data do you have? Plus, even if
| ~20% of customers care and only half will boycott, that's
| still going to have an impact on the company's bottom line.
| threeseed wrote:
| > Twitter users run the marketing department
|
| If that were the case, why is Twitter ad spend in the low
| single digits for most companies?
| dbbk wrote:
| This exists.
| spaced-out wrote:
| Maybe you might like that, but I personally don't want to wade
| through dozens of "MAKE BIG $$$$ WORKING FROM HOME!!!" posts
| every morning on my feed.
| int_19h wrote:
| This is solved by allowing people to "hire" others as their
| moderators.
| tedunangst wrote:
| Why can't I "hire" (join) a social network that
| preemptively mods according to my preferences?
| int_19h wrote:
| Because there are too few, due to market dominance of
| existing players?
| AhmedF wrote:
| Try moderating 100+ hateful messages an hour.
| AceJohnny2 wrote:
| You've never been targeted for harassment, obviously.
|
| Blocking a comment, or even blocking a user for a comment is
| useless on platforms that allow free and endless user accounts.
|
| Mail spam/scam folders of everyone's email accounts are proof
| that "let me moderate it myself" does not work for the majority
| of people.
|
| And remember "It is harder to police bad behavior than it is to
| automate it."
| commandlinefan wrote:
| > let me moderate it myself
|
| More like "let us moderate it ourselves". Reddit users
| already do this - there are extensions you can install that
| allow you to subscribe to another group of users' ban list.
| So you find a "hivemind" that you mostly agree with, join
| their collective moderation, and allow that to customize the
| content you like. The beauty is that _you_ get to pick the
| group you find most reasonable.
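|
| A minimal sketch of how such shared ban-list subscriptions
| compose; the list names and contents are invented:
|
|     SHARED_BAN_LISTS = {
|         "no-crypto-spam": {"spammer1", "spammer2"},
|         "no-flamewars": {"troll9"},
|     }
|
|     def effective_ban_list(personal: set,
|                            subscriptions: list) -> set:
|         banned = set(personal)
|         for name in subscriptions:
|             banned |= SHARED_BAN_LISTS.get(name, set())
|         return banned
|
| Your client then simply hides anything authored by a user in
| effective_ban_list(...), with no central moderator involved.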
| pjc50 wrote:
| > - Block all comments in future from (you can undo this
| in settings).
|
| This is what the existing block feature does?
| dimva wrote:
| His argument makes no sense. If this is indeed why they are
| banning people, why keep the reasoning a secret? Honestly, every
| ban should come with a public explanation from the network, in
| order to deter similar behavior. The way things are right now,
| it's unclear if, when, and for what reason someone will be
| banned. People get banned all the time with little explanation or
| explanations that make no sense or are inconsistent. There is no
| guidance from Twitter on what behavior or content or whatever
| will get you banned. Why is some rando who never worked at
| Twitter explaining why Twitter bans users?
|
| And how does Yishan know why Twitter bans people? And why should
| we trust that he knows? As far as I can tell, bans are almost
| completely random because they are enacted by random low-wage
| contract workers in a foreign country with a weak grasp of
| English and a poor understanding of Twitter's content policy (if
| there even is one).
|
| Unlike what Yishan claims, it doesn't seem to me like Twitter
| cares at all about how pleasant an experience using Twitter is,
| only that its users remain addicted to outrage and calling-out
| others, which is why most Twitter power-users refer to it as a
| "hellsite".
| sangnoir wrote:
| > Honestly, every ban should come with a public explanation
| from the network, in order to deter similar behavior
|
| This only works on non-adversarial systems. Anywhere else, it
| will be like handing bad actors (i.e. people whose
| interests will _never_ align with the operator's) a list of
| blind spots.
| noasaservice wrote:
| "You have been found guilty of crimes in $State. Please
| submit yourself to $state_prison on the beginning of the next
| month. We're sorry, but we cannot tell you what you are
| guilty of."
| vkou wrote:
| "Look, I'd like you to stop being a guest in my house,
| you're being an asshole."
|
| "PLEASE ENUMERATE WHICH EXACT RULES I HAVE BROKEN AND
| PROVIDE ME WITH AN IMPARTIAL AVENUE FOR APPEAL."
|
| ---
|
| When you're on a platform, you are a guest. When you live
| in society, you don't have a choice about following the
| rules. That's why most legal systems provide you with clear
| avenues for redress and appeal in the latter, but most
| private property does not.
| 10000truths wrote:
| Imagine for a moment what would happen if this rationale were
| extended to the criminal justice system. Due process is
| sanctified in law for a good reason. Incontestable
| assumptions of adversarial intent are the slow but sure path
| to the degradation of any community.
|
| There will _always_ be blind spots and malicious actors, no
| matter how you structure your policies on content moderation.
| Maintaining a thriving and productive community requires
| active, human effort. Automated systems can be used to
| counteract automated abuses, but at the end of the day, you
| need _human_ discretion /judgement to fill those blind spots,
| adjust moderation policies, proactively identify
| troublemakers, and keep an eye on people toeing the line.
| Spivak wrote:
| Being cagey about the reasons for bans is
|
| 1. To keep people from cozying up to the electric fence. If
| you don't know where the fence is you'll probably not risk
| a shock trying to find it. There are other ways one can
| accomplish this like bringing the banhammer down on
| everyone near the fence every so often very publicly but
| point 2 kinda makes that suck.
|
| 2. To not make every single ban a dog and pony show when
| it's circulated around the blogspam sites.
|
| I'm not gonna pass judgement as to whether it's a good
| thing or not but it's not at all surprising that companies
| plead the 5th in the court of public opinion.
| bink wrote:
| Sorta related to (1) but not really: there are also more
| "advanced" detection techniques that most sites use to
| identify things like ban evasion and harassment using
| multiple accounts. If they say "we identified that you
| are the same person using this other account and have
| reason to believe you've created this new account solely
| to evade that ban" then people will start to learn what
| techniques are being used to identify multiple accounts
| and get better at evading detection.
| sangnoir wrote:
| > Imagine for a moment what would happen if this rationale
| were extended to the criminal justice system.
|
| It already is!
|
| The criminal justice system is a perfect example of why
| total information transparency is a terrible idea: _never
| talk to the cops_ even if they just want to "get one thing
| cleared up" - your intentions don't matter, you're being
| given more rope to hang yourself with.
|
| It's an adversarial system where transparency gets you
| little, but gains your adversary a whole lot. You should
| not ever explain your every action and reasoning to the
| cops without your lawyer telling you when to STFU.
|
| Due process is sanctified, but the criminal justice system
| is self-aware enough to recognize that self-incrimination
| is a hazard, and rightly does not place the burden on the
| investigated/accused. Why should other adversarial systems
| do less?
| ascv wrote:
| Honestly it seems like you didn't read the thread. He's not
| talking about how Twitter itself works but about problems in
| moderation more generally based on his experience at Reddit.
| Also, he specifically advocates public disclosure on ban
| justifications (though acknowledges it is a lot of work).
| dang wrote:
| He also makes an important and little-understood point about
| asymmetry: the person who posts complaints about being
| treated unfairly can say whatever they want about how they
| feel they were treated, whereas the moderation side usually
| can't disclose everything that happened, even when it would
| disprove what that user is saying, because it's operating
| under different constraints (e.g. privacy concerns).
| Ironically, sometimes those constraints are there to protect
| the very person who is making false and dramatic claims. It
| sucks to be on that side of the equation but it's how the
| game is played and the only thing you can really do is learn
| how to take a punch.
| roblabla wrote:
| From my understanding, he's not claiming this is how twitter
| currently works. He's offering advice about how to solve
| content moderation on twitter.
| dontknowwhyihn wrote:
| He's offering advice that differs from what Reddit does in
| practice. They absolutely ban content rather than behavior.
| Try questioning "the science" and it doesn't matter how
| considerate you are, you will be banned.
| CountHackulus wrote:
| He covers that further down in the tweets, near the end of
| the thread. He doesn't necessarily agree with the Reddit
| way of doing things, but it has interesting compromises wrt
| privacy.
| pixl97 wrote:
| Because no one has developed a moderation framework based
| on behavior. Content is (somewhat) easy, a simple regex can
| capture that. Behavior is far more complicated and even
| more subject to our biases.
| bitxbitxbitcoin wrote:
| He's specifically referring to Reddit's content moderation,
| which actually has two levels of bans. Bans from a specific
| subreddit are handed out by that subreddit's mods; an
| explanation isn't required but is sometimes given. These bans
| apply to just the subreddit and are more akin to a block by
| the community. Bans by admins happen to people who have been
| breaking a site rule, not a subreddit rule.
|
| Both types of bans have privacy issues that result in lack of
| transparency with bans.
| matchagaucho wrote:
| tl;dr: Many posts on social media are "spam". Nobody objects to
| spam filters.
|
| Therefore, treat certain types of content as spam (based on
| metadata, not moderators).
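|
| A minimal sketch of scoring on metadata rather than content;
| the thresholds and storage here are hypothetical and kept
| in-memory for brevity:
|
|     import time
|     from collections import defaultdict, deque
|
|     post_times = defaultdict(deque)   # user id -> timestamps
|     post_hashes = defaultdict(set)    # user id -> body hashes
|
|     def looks_like_spam(user_id: str, body: str,
|                         max_per_minute: int = 5) -> bool:
|         now = time.time()
|         window = post_times[user_id]
|         window.append(now)
|         while window and now - window[0] > 60:
|             window.popleft()
|         # Repetition of near-identical bodies is the classic
|         # metadata signal; no need to read what the post says.
|         duplicate = hash(body) in post_hashes[user_id]
|         post_hashes[user_id].add(hash(body))
|         return len(window) > max_per_minute or duplicate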
| ufo wrote:
| In the US, where Twitter & Facebook are dominant, the current
| consensus in the public mind is that political polarization and
| radicalization are driven by the social media algorithms.
| However, I have always felt that this explanation was lacking.
| Here in Brazil we have many of the same problems but the dominant
| social media are Whatsapp group chats, which have no algorithms
| whatsoever (other than invisible spam filters). I think Yishan is
| hitting the nail on the head by focusing the discussion on user
| behavior instead of on the content itself.
| MikePlacid wrote:
| > I think Yishan is hitting the nail on the head by focusing
| the discussion on user behavior instead of on the content
| itself.
|
| But the user-behavior problem can be solved cheaply, easily,
| and in a scalable way:
|
| Give each user the ability to form a personal filter.
| Basically, all I need is:
|
| 1. I want to read what person A writes - always.
|
| 2. I want to read what person B writes - always, except when
| talking to person C.
|
| 3. I want to peek through the filter of a person I like - to
| discover more people who interest me.
|
| 4. Show me random people's posts 3-4 (configurable) times
| per day.
|
| This is basically how my spinal brain worked in unmoderated
| Fido-over-Usenet groups. Some server-side help would be
| great, sure, but there is nothing here that is expensive or
| not scalable.
| PS: centralized filtering is needed only when you are going
| after some content, not noise.
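|
| A minimal sketch of those four rules as code; the Post and
| Rules shapes are invented for illustration:
|
|     import random
|     from dataclasses import dataclass, field
|
|     @dataclass
|     class Post:
|         author: str
|         in_reply_to: str | None = None  # who they're talking to
|
|     @dataclass
|     class Rules:
|         always: set = field(default_factory=set)  # rule 1
|         unless_talking_to: dict = field(default_factory=dict)  # rule 2
|         stranger_peek_chance: float = 0.001       # rule 4
|
|     def wants(rules: Rules, post: Post) -> bool:
|         if post.author in rules.always:
|             blocked = rules.unless_talking_to.get(post.author, set())
|             return post.in_reply_to not in blocked
|         # Rule 4: occasionally surface a stranger's post; tune
|         # the chance to feed volume so it fires a few times/day.
|         return random.random() < rules.stranger_peek_chance
|
| Rule 3 (peeking through a friend's filter) is just evaluating
| wants() against someone else's Rules object.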
| ufo wrote:
| I disagree; we can't frame this discussion around only the
| content. My whatsapp "feed" is doing just fine. The problem
| is all the other whatsapp groups that I'm not in, which are
| filled with hateful politics. It hurts when you meet in
| real life a friend you haven't seen in a while and then
| find out that they've been radicalized.
|
| The radical Bolsonaro whatsapp groups are a mix of top down
| and grass roots content. On one end there is the central
| "propaganda office", or other top political articulators. On
| the bottom are the grassroots group chats: neighborhoods,
| churches, biker communities, office mates, etc. Memes and
| viral content flow in both directions. The contents and
| messages that resonate in the lower levels get distributed
| by the central articulators, which have a hierarchy of group
| chats to circulate new propaganda as widely as possible. You
| can see this happen in real time when a political conundrum
| happens, e.g. a corruption scandal. The central office will A-B
| test various messages in their group chats and then the one
| that resonates better with their base gets amplified and
| suddenly they manage to "change the topic" on the news. The
| end result is that we just had 4 years of political chaos,
| where the modus operandi of the government was to put out
| fires by deflecting the public discourse whenever a new
| crisis emerged. It's not a problem of content itself, that
| could be solved by a better filtration algorithm. It's a
| deeper issue having to do with how quickly memetic ideas can
| spread in this new world of social media.
| originalvichy wrote:
| I actually went on a deep dive into statistical studies of
| twitter bans by American political leaning.
|
| In both studies I found, the most statistically significant
| user behavior predicting a ban was a tendency to post articles
| from low-quality online "news" sites.
|
| So essentially even the political controversy around moderation
| boils down to the fact that one side, the right, happily posts
| low-quality news/fake news, for which it gets banned for
| disinformation or other rule-breaking behavior.
|
| https://psyarxiv.com/ay9q5
| leereeves wrote:
| The first concern is identifying "fake news".
|
| One of the biggest political news stories of 2020, Hunter
| Biden's laptop, was falsely declared misinformation, and the
| NY Post was accused of being a low quality site. Now we know
| it's true.
|
| On the other hand, the Steele Dossier was considered
| legitimate news at the time and "many of the dossier's most
| explosive claims...have never materialized or have been
| proved false."[1].
|
| So I'd like to know exactly what the study's authors
| considered low-quality news, but unfortunately I couldn't
| find a list in the paper you linked. In my experience, most
| people tend to declare sources "high-quality" or "low-
| quality" based on whether they share the same worldview.
|
| 1: https://www.nytimes.com/2021/05/15/business/media/spooked-
| pr...
| haberman wrote:
| I had the same skepticism as you, but the study authors did
| attempt to be fair by letting a set of politically balanced
| laypeople (equal number Democrat and Republican) adjudicate
| the trustworthiness of sites. They also had journalists
| rate the sites, but they present the results of both
| results (layperson-rated and journalist-rated).
|
| I wish they had included the list so we could see for
| ourselves. It's still possible that there are flaws in the
| study. But it appears to take the concern of fairness
| more seriously than most.
| nfgivivu wrote:
| lifeisstillgood wrote:
| This to me is a vital point.
|
| One of the things rarely touched on about Twitter / FB et al
| is that they are transmission platforms with a discovery /
| recommendation layer on top.
|
| The "algorithm" is this layer on top and it is assumed that
| this actively sorts people into their bubbles and people
| passively follow - there is much discussion about splitting the
| companies AT&T style to improve matters.
|
| But countries where much of the discourse is on WhatsApp do not
| have WhatsApp to do this recommendation - it is done IRL
| (organically) - and people actively sort themselves.
|
| The problem is not (just) the social media companies. It lies
| in us.
|
| The solution if we are all mired in the gutter of social media,
| is to look up and reach for the stars.
| monksy wrote:
| > No, what's really going to happen is that everyone on council
| of wise elders will get tons of death threats, eventually quit...
|
| Yep, if you can't stand being called an n* (or other racial
| slurs), don't be a reddit moderator. I've been called both a
| hillary bootlicker and a trump one.
|
| Being a reddit moderator isn't for the thin-skinned. I hosted
| social meetups, so this could have spilled over into the real
| world. Luckily I had strong social support in the group, where
| that would have been taken care of real quick. I've only had one
| guy who threatened to come and be disruptive at one of the
| meetups. He did come out. He did meet me.
|
| ----
|
| > even outright flamewars are typically beneficial for a small
| social network:
|
| He's absolutely correct. It also helps to define community
| boundaries and avoid extremism. A lot of this "don't be mean"
| culture only endorses moderators stepping in, dictating how a
| community talks, and officially bullying people who disagree.
| fuckHNtho wrote:
| tldr tangential babbling that HN protects and wants us to
| admire...because reddit YC darlings. it almost makes me feel
| nostalgic.
|
| Why are we to take yishan as an authority on content moderation,
| have you BEEN to reddit?! the kind of moderation of repetitive
| content he's referring to is clearly not done AT ALL.
|
| He does not put forth any constructive advice. be "operationally
| excellent". ok, thanks. you're wrong about spam. you're wrong
| about content moderation. ok, thanks. who is his audience? he's
| condescending to the people who are dialed into online discourse
| in between finding new fun ways to plant trees and design an
| indulgent hawaiian palace. i expected more insight, to be honest.
| but time and time again we find the people at the top of internet
| companies are disappointingly common in their perspective on the
| world. they just happened to build something great once and it
| earned them a lifetime soapbox ticket.
|
| ok, thanks.
| P_I_Staker wrote:
| Key word here: ex (joking)... but seriously I'm absolutely
| baffled why someone would look to a former reddit exec for advice
| on moderation.
|
| I guess you could say that they have experience, having made all
| the mistakes, and figured it out through trial and error! This
| seems to be his angle.
|
| What I got from the whole reddit saga is how horrible the
| decision making was, and won't be looking to them for sage
| advice. These people are an absolute joke.
| mikkergp wrote:
| Who is doing a good job at scale? Is there really anyone we can
| look to other than people who "have experience, having made all
| the mistakes, and figured it out through trial and error"?
| P_I_Staker wrote:
| Sorry if this wasn't clear, but that's just his perspective.
| Mine is that they're a bunch of clowns with little to offer
| anyone. Who cares what this person thinks more than you, me,
| or a player for the Miami Dolphins?
|
| Twitter is going to have to moderate at least exploitative
| and a ton of abusive content, eventually. I don't understand
| how this rant is helpful in the slightest. Seemed like a
| bunch of mumbo jumbo.
|
| You do have a good point about there not being very many good
| actors, if any.
| dbbk wrote:
| Who cares what this person thinks? They actually have
| experience tackling the problem. You or I have never been
| in a position of tackling the problem. Of course I am
| interested in the experience of someone who has seen this
| problem inside and out.
| armchairhacker wrote:
| I wonder if the problems the author describes can be solved by
| artificially downvoting and not showing spam and flamewar
| content, not banning people.
|
| - Spam: don't show it to anyone, since nobody wants to see it.
| Repeatedly saying the same thing will get your posts heavily
| downvoted or just coalesced into a single post.
|
| - Flamewars: again, artificially downvote them so that your
| average viewer doesn't even see them (if they aren't naturally
| downvoted). And also discourage people from participating, maybe
| by explicitly adding the text "this seems like a stupid thing to
| argue about" onto the thread and next to the reply button. The
| users who persist in flaming each other and then get upset, at
| that point you don't really want them on your platform anyways
|
| - Insults, threats, etc: again, hide and reword them. If the
| system detects someone is sending an insult or threat, collapse
| it into "[insult]" or "[threat]" so that people know the kind of
| thing being sent but not the emotion (though honestly, you
| probably should ban threats altogether). You can actually do this
| for all kinds of vitriolic, provocative language. If someone
| wants to hear it, they can expand the "[insult]" bubble; the
| point is that most people probably don't.
|
| It's an interesting idea for a social network. Essentially,
| instead of banning people and posts outright, down-regulate them
| and collapse what they are saying while retaining the content. So
| their "free speech" is preserved, but they are not bothering
| anyone. If they complain about "censorship", you can point out
| that the First Amendment doesn't require anyone to hear you, and
| people _can_ hear you if they want to - but people have
| specified, and the algorithm detects, that they don't.
|
| EDIT: Should also add that Reddit actually used to be like this,
| where subreddits had moderators but admins were very hands-off
| (actually just read about this yesterday). And it resulted in
| jailbait and hate subs (and though this didn't happen, could have
| resulted in dangerous subs like KiwiFarms). I want to make clear
| that I still think that content should be banned. But that
| content isn't what the author is discussing here: he is
| discussing situations where "behavior" gets people banned and
| then they complain that their (tame) content is being censored.
| Those are the people who should be down-regulated and text
| collapsed instead of banned.
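|
| A minimal sketch of that down-regulate-and-collapse idea,
| assuming some upstream classifier labels a post "ok",
| "insult", or "threat" (the labels and penalty are made up):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class RenderedPost:
|         visible_text: str
|         hidden_text: str = ""      # shown only if reader expands
|         rank_penalty: float = 0.0  # subtracted from feed score
|
|     def render(body: str, label: str) -> RenderedPost:
|         if label == "ok":
|             return RenderedPost(visible_text=body)
|         # Keep the speech, but show a placeholder by default
|         # and push the post far down the feed.
|         return RenderedPost(visible_text=f"[{label}]",
|                             hidden_text=body,
|                             rank_penalty=10.0)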
| pluc wrote:
| Reddit uses an army of free labour to moderate.
| ConanRus wrote:
| jamisteven wrote:
| How about: don't moderate it? Just let it be.
| jameskilton wrote:
| Every single social media platform that has ever existed makes
| the same fundamental mistake. They believe that they just have to
| remove or block the bad actors and bad content and that will make
| the platform good.
|
| The reality is _everyone_ , myself included, can be and will be a
| bad actor.
|
| How do you build and run a "social media" product when the very
| act of letting anyone respond to anyone with anything is itself
| the fundamental problem?
| [deleted]
| onion2k wrote:
| _The reality is everyone, myself included, can be and will be a
| bad actor._
|
| Based on this premise we can conclude that the best way to
| improve Reddit and Twitter is to block everyone.
| madeofpalk wrote:
| To be honest, I would not disagree with that. Very 'the only
| winning move is not to play'.
| PM_me_your_math wrote:
| To be honest, and maybe this will be panned, but the real
| answer is for people to grow thicker skin and stop putting
| their feelings on a pedestal above all.
| mikkergp wrote:
| Interesting, that wasn't my interpretation of the twitter
| thread, it was more that spam and not hurtful content was
| the real tricky thing about moderating social media.
| chipotle_coyote wrote:
| Spam was more of an example than the point, I think --
| the argument Yishan is making is that moderation isn't
| for _content,_ it 's for _behavior._ The problem is that
| if bad behavior is tied to partisan and /or controversial
| content, which it often is, people react as if the
| moderation is about the content.
| pwinnski wrote:
| When _I_ make comments about _other_ people, they need to
| grow thicker skin.
|
| When _other_ people attack _me_ personally, it's a deep
| and dangerous violation of social norms.
| madeofpalk wrote:
| I don't block people because they hurt my feelings, I block
| people because I'm just not interested in seeing bird
| watching content on my timeline. No one deserves my
| eyeballs.
| dang wrote:
| That's asking human nature to change, or at least asking
| almost everyone to work on their trauma until they don't
| get so activated. Neither will happen soon, so this can't
| be the real answer.
| horsawlarway wrote:
| Look - I don't even particularly disagree with you, but I
| want to point out a problem with this approach.
|
| I'm 33. I grew up playing multiplayer video games
| (including having to run a db9 COM cable across the house
| from one machine to another to play warcraft 2
| multiplayer, back when you had to explicitly pick the
| protocol for the networking in the game menu)
|
| My family worked with computers, so I've had DSL for as long
| as I can remember. I played a ton of online games. The
| communities are _BRUTAL_. They are insulting, abusive,
| misogynistic, racist, etc... the spectrum of unmonitored
| teenage angst, in all its ugly forms (and to be fair,
| some truly awesome folks and places).
|
| As a result - I have a really thick skin about basically
| everything said online. But a key difference between the
| late 90s and today, is that if I wanted it to stop, all I
| had to do was close the game I was playing. Done.
|
| Most social activities were in person, not online. I
| could walk to my friends' houses. I could essentially
| tune out all the bullshit by turning off my computer, and
| there was plenty of other stuff to go do where the
| computer wasn't involved at all.
|
| I'm not convinced that's enough anymore. The computer is
| in your pocket. It's always on. Your social life is
| probably half online, half in person. Your school work is
| online. Your family is online. your reputation is online
| (as evidenced by those fucking blue checkmarks). The
| abuse is now on a highway into your life, even if you
| want to turn it off.
|
| It's like the school bully is now waiting for you
| everywhere. He's not waiting at school - he's stepping
| into the private conversations you're having online. He's
| talking to your friends. He's hurling abuse at you when
| you look at your family photos. He's _in_ your life in a
| way that just wasn't possible before.
|
| I don't think it's fair to say "Just grow a thicker skin"
| in response to that. I think growing a thicker skin is
| desperately needed, but I don't think it's sufficient.
| The problem is deeper.
|
| We have a concept for people who do the things these
| users are doing on twitter in person - They're called
| fighting words, and most times, legally (even in the US)
| there is _zero_ assumption of protected speech here. You
| say bad shit about someone with the goal of riling them
| up and no other value? You have no right of free speech,
| because you aren't "speaking" - you're trying to start a
| fight.
|
| I'm not protecting your ability to bully someone. Full
| stop. If you want to do that, do it with the clear
| understanding that you're on your own, and regardless of
| how thick my skin is - I think you need a good slap
| upside the head. I'd cheer it on.
|
| In person - this resolves itself because the fuckwads who
| do this literally get physically beaten. Not always - but
| often enough we have a modicum of civil discussion we
| accept, and a point where no one is going to defend you
| because you were a right little cunt, and the beating was
| well deserved.
|
| I don't know how you simulate the same constraint online.
| I'm not entirely sure you can, but I think the answer
| isn't to just stop trying.
| ryandrake wrote:
| > The computer is in your pocket. It's always on. Your
| social life is probably half online, half in person. Your
| school work is online. Your family is online. your
| reputation is online (as evidenced by those fucking blue
| checkmarks). The abuse is now on a highway into your
| life, even if you want to turn it off.
|
| It is still a choice to participate online. I'm not on
| Twitter or Facebook or anything like that. It doesn't
| affect my life in the slightest. Someone could be on
| there right this minute calling me names, and it can't
| bother me because I don't see it, and I don't let it into
| my life. This is not a superpower, it's a choice to not
| engage with social media and all the ills it brings.
|
| Have I occasionally gotten hate mail from an HN post?
| Sure. I even got a physical threat over E-mail (LOL good
| luck, guy). If HN ever became as toxic as social media
| can be, I could just stop posting and reading. Problem
| solved. Online is not real if you just ignore it.
| Sohcahtoa82 wrote:
| The attitude of "If you don't like it, leave!" is
| allowing the bullies to win.
|
| Minorities, both racial and gender, should be able to use
| social media without having vitriol spewed at them
| because they're guilty of being a minority.
| pjc50 wrote:
| Paul Pelosi's skin wasn't thick enough to deflect
| hammers.
|
| (OK, that wasn't a twitter problem, but the attack on him
| was 100% a product of unmoderated media in general)
| PM_me_your_math wrote:
| I respectfully disagree, not least because there is
| no way you can be 100% certain 'unmoderated media' was
| the primary motivator. Nobody can presume to know his
| motivations or inner dialogue. A look at that man's
| history shows clear mental health issues and self-
| destructive behavior, so we can infer some things but
| never truly know.
|
| Violence exists outside of mean tweets and political
| rhetoric. People, even crazy ones, almost always have
| their own agency even if it runs contrary to what most
| consider to be normal thoughts and behavior. They choose
| to act, regardless of others and mostly without concern
| or conscious. There are crazy people out there and
| censoring others wont ever stop bad people from doing bad
| things. If so, then how do we account for the evils done
| by those prior to our inter-connected world?
| krtzulo wrote:
| We really don't know much. The attacker used drugs; his
| ex-partner said that he went away for a year and came back a
| changed person.
|
| He lived in a community with BLM signs and a rainbow
| flag. He made hemp jewelry.
|
| He registered a website three months ago and only
| recently filled it with standard extreme right garbage.
|
| This is all so odd that for all we know someone else
| radicalized him offline, the old fashioned way.
| rchaud wrote:
| You hit the nail on the head, but maybe the other way around.
|
| "Block" and "Mute" are the Twitter user's best friends. They
| keep the timeline free of spam, be it advertisers, or the
| growth hackers creating useless threads of Beginner 101 info
| and racking up thousands of likes.
| gorbachev wrote:
| After using several communications tools over the past
| couple of decades (BBSes, IRC, Usenet, AIM, plus the ones
| kids these days like), I'm convinced blocking and/or muting
| is required for any digital mass communication tool anyone
| other than sociopaths would use.
| MichaelZuo wrote:
| Doesn't Twitter give the option of a whitelist (Just who you
| follow + their retweets) already?
| mikkergp wrote:
| Not really. Even then it still shows recommended tweets, and I
| don't want to see retweets or likes, which you have to turn
| off per person.
| fknorangesite wrote:
| Yes, really. I never see 'recommended' or 'so-and-so
| liked...' in my feed.
| the_only_law wrote:
| I had to create a new account after losing mine, and
| without following many people it seems like easily 30-50%
| of my feed is recommended content.
| fknorangesite wrote:
| There are "home" and "newest" feeds. I agree it's shitty
| that the default shows this stuff, but you just have to
| switch it over to "newest."
| leephillips wrote:
| Yes really: you can get this experience with Twitter:
| https://lee-phillips.org/howtotwitter/
| mikkergp wrote:
| Edit: I apologize that my original post was dismissive of
| your effort to help people use twitter.
| leephillips wrote:
| Don't worry about it. I didn't exactly understand your
| original comment, but I don't have a problem with people
| having opinions.
| MichaelZuo wrote:
| That's unnecessarily dismissive of someone trying their
| best to share some tips. It's not like they charged you
| to read it.
| mikkergp wrote:
| Fair enough it wasn't meant to be a commentary on them,
| but I will edit with an apology
| thrown_22 wrote:
| Invictus0 wrote:
| I'm not a bad actor, I only have 3 tweets and they're all
| reasonable IMO. So your premise is wrong.
| phillipcarter wrote:
| > The reality is everyone, myself included, can be and will be
| a bad actor.
|
| But you likely aren't, and most people likely aren't either.
| That's the entire premise behind removing bad actors and spaces
| that allow bad actors to grow.
| pessimizer wrote:
| > But you likely aren't, and most people likely aren't
| either.
|
| Is there any evidence of this? 1% bad content can mean that
| 1% of your users are bad actors, or it can mean that 100% of
| your users are bad actors 1% of the time (or anything in
| between.)
| munificent wrote:
| I assume all of us have evidence of this in our daily
| lives.
|
| Even the best people we know have bad days. But you have
| probably also encountered people in your life who have
| consistent patterns of being selfish, destructive, toxic,
| or harmful.
| pessimizer wrote:
| > you have probably also encountered people in your life
| who have consistent patterns of being selfish,
| destructive, toxic, or harmful.
|
| This is not evidence that most bad acts are done by bad
| people. This is evidence that I've met people who've
| annoyed or harmed me at one or more points, and projected
| my personal annoyance into my fantasies of their internal
| states or of their _essence._ Their "badness" could
| literally have only consisted of the things that bothered
| me, and during the remaining 80% of the time (that I
| wasn't concerned with) they were tutoring orphans in
| math.
|
| Somebody who is "bad" 100% of the time on twitter could
| be bad 0% of the time off twitter, and vice-versa. Other
| people's personalities aren't reactions to our values and
| feelings; they're as complex as you are.
|
| As the OP says: our definitions of "badness" in this
| context are of _commercial_ badness. Are they annoying
| our profitable users?
|
| edit: and to add a bit - if you have a diverse userbase,
| you should expect them to annoy each other at a pretty
| high rate with absolutely no malice.
| simple-thoughts wrote:
| Your logic makes sense but is not how these moderation
| services actually work. When I used my own phone number to
| create a Twitter, I was immediately banned. So instead I
| purchased an account from a service with no issues. It's
| become impossible for me at least to use large platforms
| without assistance from an expert who runs bot farms to build
| accounts that navigate the secret rules that govern bans.
| cwkoss wrote:
| Spam is a behavior, not a fundamental trait of the actor.
|
| Would be interesting to make a service where spammers have to
| do recaptcha-like spam flagging to get their account
| unlocked.
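|
| A minimal sketch of how that unlock flow might work, in the
| spirit of how reCAPTCHA checks answers it already knows. All
| names and thresholds here are invented for illustration:
|
|     # Hypothetical: a locked account must correctly label
|     # content the platform already has trusted labels for
|     # (e.g. moderator consensus) before it is unlocked.
|     REQUIRED_CORRECT = 20
|
|     # item id -> True if the item is known spam
|     trusted_labels = {"post:123": True, "post:456": False}
|
|     class LockedAccount:
|         def __init__(self, account_id: str):
|             self.account_id = account_id
|             self.correct = 0
|
|         def submit_label(self, item_id: str,
|                          says_spam: bool) -> bool:
|             """Record one flagging task; True once unlocked."""
|             if trusted_labels.get(item_id) == says_spam:
|                 self.correct += 1
|             return self.correct >= REQUIRED_CORRECT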
| fragmede wrote:
| Which definition of spam are you operating under? I think
| it _is_ a fundamental trait of the actor.
| pessimizer wrote:
| So you would expect a spammer to only ever post spam,
| even on their own personal account? Or a spam emailer to
| never send a personal email?
| fragmede wrote:
| I know sales bros who live their lives by their ABCs -
| always be closing - but that's beside the point. If the
| person behind the spam bot one day wakes up and decides
| to turn over a new leaf and do something else with their
| life, they're not going to use the buyC1alis@vixagra.com
| email address they use for sending spam as the basis for
| their new persona. Thus sending spam is inherent to the
| buyC1alis@vixagra.com identity that we see - of course
| there's a human being behind it, but as we'll never know
| them in any other context, that is who they are to us.
| stouset wrote:
| > and spaces that allow bad actors to grow
|
| I believe that's GP's point! Any of us has the potential to
| be the bad actor in some discussion that gets us irrationally
| worked up. Maybe that chance is low for you or I, but it's
| never totally zero.
|
| And _even if_ the chance is zero for you or I specifically,
| there's no way for the site operators to a priori know that
| fact or to be able to predict which users will suddenly
| become bad actors and which discussions will trigger it.
| pixl97 wrote:
| I think the point is that anyone and/or everyone can be a bad
| actor in the right circumstances, and moderation's job is to
| prevent those circumstances.
| edgyquant wrote:
| We have laws around mobs and peaceful protest for a reason.
| Even the best people can become irrational as a group. The
| groupmind is what we need controls for: not good and bad
| people.
| dgant wrote:
| This is something Riot Games has spoken on, the observation
| that ordinary participants can have a bad day here or there,
| and that forgiving corrections can preserve their participation
| while reducing future incidents.
| synu wrote:
| Did Riot eventually sort out the toxic community? If so that
| would be amazing, and definitely relevant. I stopped playing
| when it was still there, and it was a big part of the reason
| I stopped.
| BlargMcLarg wrote:
| The only success I've seen in sorting out random vitriol is
| cutting chat off entirely and minimizing methods of passive
| aggressive communication. But Nintendo's online services
| haven't exactly scaled to the typical MOBA size to see how
| it actually works out.
| ketzo wrote:
| Anecdotally: outright flame / chat hate is a bit better
| than it used to be, but not much.
| zahrc wrote:
| I played very actively from 2010 to 2014 and have been
| on-off since then, sometimes skipping a season.
|
| Recently picked it up again and I noticed that I didn't
| have to use /mute all anymore. I've got all-chat disabled
| by default so I've got no experience there, but overall
| I'd say it has come a long way.
|
| But I'd also say it depends which mode and MMR you are in.
| I mostly play draft pick normals or ARAMs, both of which
| I have a lot of games in - I heard from a mate that chat
| is unbearable in low level games.
| gambler wrote:
| It's not a mistake. It's a PR strategy. Social media companies
| are training people to blame content and each other for the
| effects that are produced by design, algorithms and moderation.
| This reassigns blame away from things that those companies
| control (but don't want to change) to things that aren't
| considered "their fault".
| stcredzero wrote:
| The original post is paradoxical in the very way it talks about
| social media being paradoxical.
|
| He observes that social media moderation is about signal to
| noise. Then he goes on about introducing off-topic noise. Then,
| he comes to conclusions that seem to ignore his original
| conclusion about it being a S/N problem.
|
| Chiefly, he doesn't show how a "council of elders" is necessary
| to solve S/N problems.
|
| Strangely enough, Slashdot seems to have a system which worked
| pretty well back in the day.
| bombcar wrote:
| I think the key is that no moderation can withstand _outside_
| pressure. A community can be entirely consistent and happy
| but the moment outside pressure is applied it folds or falls.
| stcredzero wrote:
| Slashdot moderation is largely done by the users
| themselves, acting anonymously as "meta-moderators." I
| think they were inspired by Plato's ideas around partially
| amnesiac legislators who forget who they are while
| legislating.
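|
| Roughly, as a purely hypothetical sketch (not Slashdot's
| actual implementation): moderation acts are later reviewed
| by uninvolved users, and the review feeds back into the
| moderator's standing.
|
|     import random
|
|     def ask_fair(reviewer, comment, label) -> bool:
|         """Stand-in for showing the reviewer the moderated
|         comment, without names, and asking 'was this fair?'"""
|         return True  # placeholder answer
|
|     def metamoderate(past_moderations, reviewer, karma):
|         act = random.choice(past_moderations)
|         if reviewer in (act["moderator"], act["author"]):
|             return  # only uninvolved users get to review
|         delta = 1 if ask_fair(reviewer, act["comment"],
|                               act["label"]) else -1
|         karma[act["moderator"]] += delta  # fair calls earn trust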
| paul7986 wrote:
| Having a verified public Internet/Reputation ID system for
| those who want to be bad or good publicly is one way!
|
| All others are just trolls not backed up by their verified
| public Internet/Reputation ID.
| P_I_Staker wrote:
| At the very least you could be susceptible to overreacting
| because of an emotionally charged issue. E.g. Reddit's Boston
| marathon bomber disaster, when they started trying to round
| up brown people (the actual perp "looked white").
|
| Maybe that wouldn't be your crusade and maybe you would think
| you were standing up for an oppressed minority. You get overly
| emotional, and you could be prone to making some bad decisions.
|
| People act substantially differently on reddit vs. hackernews;
| honestly I have to admit to being guilty of it. Some of the
| cool heads here are probably simultaneously engaged in
| flamewars on reddit/twitter.
| esotericimpl wrote:
| Charge them $10 to create an account (anonymous, real, parody
| whatever), then if they break a rule give them a warning, 2
| rule breaks, a 24 hour posting suspension, 3 strikes and
| permanently ban the account.
|
| Let them reregister for $10.
|
| Congrats, i just solved spam, bots, assholes and permanent line
| steppers.
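|
| As a minimal sketch (storage and names are hypothetical), the
| whole escalation policy fits in a few lines:
|
|     # Hypothetical sketch of the $10 / three-strikes policy.
|     SIGNUP_FEE_USD = 10
|
|     class Account:
|         def __init__(self):
|             self.strikes = 0
|             self.banned = False
|
|         def record_violation(self) -> str:
|             self.strikes += 1
|             if self.strikes == 1:
|                 return "warning"
|             if self.strikes == 2:
|                 return "24 hour posting suspension"
|             self.banned = True  # third strike: permanent
|             return f"banned; re-register for ${SIGNUP_FEE_USD}"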
| etchalon wrote:
| You solved bots, but destroyed the product.
| lancesells wrote:
| I don't even know if it solved bots. Rich countries, rich
| organizations, rich people could do a lot. $100M would buy
| you 10M bots.
| cvwright wrote:
| I think the idea is that it shifts the incentives. Sure,
| a rich nation state could buy tons of bot accounts at $10
| a pop. But is that still the most rational path to their
| goal? Probably not, because there are lots of other
| things you can do for $100M.
| trynewideas wrote:
| I mean, who here remembers app.net? Love the Garry Tan
| endorsement! https://web.archive.org/web/20120903182620/https://join.app....
|
| EDIT: Lol Dalton's PART of YC now. Hey dude, why not pitch
| it then
| DeanWormer wrote:
| This was the strategy at the SomethingAwful forums. They
| seemed pretty well moderated, but definitely never hit the
| scale of Reddit or Twitter.
| v64 wrote:
| Having posted there in its heyday, it made for an
| interesting self-moderation dynamic for sure. Before I
| posted something totally offbase that I knew I'd be
| punished for, I had to think "is saying this stupid shit
| really worth $10 to me?". Many times that was enough to get
| me to pause (but sometimes you also can't help yourself and
| it's well worth the price).
| pfortuny wrote:
| The problem is _the meaning of those rules_. Any rule looks
| reasonable when it is written down. And after some time it
| becomes a weapon.
|
| For instance, the three deletions (forget the exact term)
| rule in wikipedia. It is now a tool used by "the first to
| write"...
| dwater wrote:
| This is how the SomethingAwful forums operated when they
| started charging for accounts. Unfortunately it probably
| wouldn't be useful as a test case because it was/is, at its
| core, a shitposting site.
| rsync wrote:
| I think metafilter still does this?
| nebqr wrote:
| And twitter isn't?
| pixl97 wrote:
| Unless you generate more than $10 from the account. For
| example, in US presidential election years, billions are
| spent on election advertising. A few PACs would gladly
| throw cash at astroturf movements on social media even at the
| risk of being banned.
| pessimizer wrote:
| Sounds good to me. That would mean that your energy in
| moderation would directly result in income. If superpacs
| are willing to pay $3.33 a message, that's a money-spinner.
| pclmulqdq wrote:
| This kind of thing worked for a few forums that tried it
| before FB/Twitter came around.
| Covzire wrote:
| Give the user exclusive control over what content they can see.
| As far as bans are concerned, the platform should act against
| users only over legal matters.
|
| Everything else, like being allowed to spam or post too
| quickly, is a bug, and bugs should be addressed in the open.
| visarga wrote:
| > The reality is everyone, myself included, can be and will be
| a bad actor.
|
| Customised filters for anyone, but I am talking about filters
| completely under the control of the user. Maybe running
| locally. We can wrap ourselves in a bubble but better that than
| having a bubble designed by others.
|
| I think AI will make spam irrelevant over the next decade by
| switching from searching and reading to prompting the bot. You
| don't ever need to interface with the filth, you can have your
| polite bot present the results however you please. It can be
| your conversation partner and you get to control its biases as
| well.
|
| Internet <-> AI agent <-> Human
|
| (the web browser of the future, the actual web browser runs in
| a sandbox under the AI)
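|
| A toy sketch of that pipeline, with the scoring function as a
| stand-in for whatever locally run model the user picks
| (everything here is invented for illustration):
|
|     def spam_score(post: str) -> float:
|         """Stand-in for a local model; returns 0..1."""
|         words = post.lower().split()
|         spammy = {"buy", "free", "click", "winner"}
|         return len(spammy.intersection(words)) / max(len(words), 1)
|
|     def my_filter(posts, threshold=0.2):
|         # The user owns both the threshold and the model,
|         # not the platform.
|         return [p for p in posts if spam_score(p) < threshold]
|
|     feed = my_filter(["lovely sunset today",
|                       "click here FREE winner"])
|     # -> only "lovely sunset today" survives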
| swayvil wrote:
| I'll raise you a forum-trained AI spambot to defeat the AI
| spamfilter. It'll be an entirely automated arms race.
| Melatonic wrote:
| Not true at all - everyone has the capacity for bad behaviour
| in the right circumstances but most people are not, in my
| opinion, there intentionally to be trolls.
|
| There is a minority who love to be trolls and get any big
| reaction out of people (positive or negative). Those people are
| the problem. But they are also often very good at evading
| moderation, or lying in wait and toeing the line between
| bannable offences and just ever so slightly controversial
| comments.
| bnralt wrote:
| Some people are much more likely to engage in bad behavior than
| others. The thing is, people who engage in bad behavior are
| also much more likely to be "whales," excessive turboposters
| who have no life and spend all day on these sites.
|
| Someone who has a balanced life, who spends time at work, with
| family, in nature, only occasionally goes online, uses most of
| their online time for edification, spends 30 minutes writing a
| reply if they decide one is warranted - that type of person is
| going to have a minuscule output compared to the whales. The
| whales are always online, thoughtlessly writing responses and
| upvoting without reading articles or comments. They have a
| constant firehose of output that dwarfs other users.
|
| Worth reading "Most of What You Read on the Internet is Written
| by Insane People"[1].
|
| If you actually saw these people in real life, chances are
| you'd avoid interacting with them. People seeing a short
| interview with the top mod of antiwork almost destroyed that
| sub (and led to the mod stepping down). People say the
| internet is a bad place because people act badly when they're
| not face to face. That might be true to some extent, but we're
| given online spaces where it's hard to avoid "bad actors" (or
| people that engage in excessive bad behavior) the same way we
| would in person.
|
| And these sites need the whales, because they rely on a
| constant stream of low quality content to keep people engaged.
| There are simple fixes that could be done, like post limits and
| vote limits, but sites aren't going to implement them. It's
| easier to try to convince people that humanity is naturally
| terrible than to admit they've created an environment that
| enables - and even relies on - some of the most unbalanced
| individuals.
|
| [1]
| https://www.reddit.com/r/slatestarcodex/comments/9rvroo/most...
| ajmurmann wrote:
| It sounds like an insurmountable problem. What makes this even
| more interesting to me is that HN seems to have this working
| pretty well. I wonder how much of it has to do with clear
| guidelines of what should be valued and what shouldn't and
| having a community that buys in to that. For example one learns
| quickly that Reddit-style humor comments are frowned upon
| because the community enforces it with downvotes and frequently
| explanations of etiquette.
| etchalon wrote:
| I suspect HN succeeds due to heavy moderation, explicit
| community guidelines and a narrow topic set.
| blep_ wrote:
| Some areas of reddit do similar things with similar
| results. AskHistorians and AskScience are the first two to
| come to mind.
|
| This may be a lot easier in places where there's an
| explicit _point_ to discussion beyond the discussion itself
| - StackOverflow is another non-Reddit example. It 's easier
| to tell people their behavior is unconstructive when it's
| clearly not contributing to the goal. HN's thing may just
| be to declare a particular type of conversation to be the
| goal.
| vkou wrote:
| HN works very well, because it's about as far from free
| speech as you can get on the internet, short of dang
| personally approving every post.
| swayvil wrote:
| It's proof of the old adage: the best possible government
| is a benign dictator.
| theGnuMe wrote:
| I think most posts are short lived so they drop off quickly
| and people move on to new content. I think a lot of folks
| miss a lot of activity that way. I know I miss a bunch. And
| if you miss the zeitgeist it doesn't matter what you say
| cause nobody will reply.
|
| The Twitter retweet constantly amplifies, and tweets are
| centered around an account vs. a post.
|
| Reddit should behave similarly but I think subreddit topics
| stick longer.
| luckylion wrote:
| Very good point about the "fog of war". If HN had a
| reply-notification feature, it would probably look
| different. Every now and then someone builds a
| notification feature as an external service. I wonder if
| you can measure change in the behavior of people before
| and after they've started using it?
|
| Of course, that also soft-forces everyone to move on.
| Once a thread is a day or two old, you can still reply,
| but the person you've replied to will probably not read
| it.
| rjbwork wrote:
| There's also the fact that there's no alerts about people
| replying to you or commenting on your posts. You have to
| explicitly go into your profile, click comments, and
| _then_ you can see _if_ anyone has said anything to you.
|
| This drastically increases time between messages on a
| topic, lets people cool off, and lets a topic naturally
| die down.
| prox wrote:
| What kind of free speech is not allowed? What can't you say
| right now that you feel should?
| ajmurmann wrote:
| Category 1 from Yishan's thread, spam, obviously isn't
| allowed. But also thinking about his general framework
| of it all coming down to signal vs noise, most "noise"
| gets heavily punished on here. Reddit-style jokes
| frequently end in the light greys or even dead. I had my
| account shadow-banned over a decade ago because I made a
| penis joke and thought people didn't get the joke.
| kevin_thibedeau wrote:
| Free speech doesn't mean you can say whatever, wherever,
| without any repercussions. It solely means the government
| can't restrict your expression. On a private platform you
| abide their rules.
| michaelt wrote:
| Where are the people arguing about Donald Trump? Where
| are the people promoting dodgy cryptocurrencies? Where
| are the people arguing about fighting duck-sized horses?
| Where's the Ask HN asking for TV show recommendations?
| slg wrote:
| If we follow the logic of Yishan's thread, HN frowns upon and
| largely doesn't allow discussion that would fall into group 3
| which removes most of the grounds for accusations of
| political and other biases in the moderation. As Yishan says,
| no one really cares about banning groups 1 and 2, so no one
| objects to when that is done here.
|
| Plus scale is a huge factor. Automated moderation can have
| its problems. Human moderation is expensive and hard to keep
| consistent if there are large teams of individuals that can't
| coordinate on everything. HN's size and its lack of desire
| for profit allow for a very small human moderation team that
| leads to consistency because it is always the same people
| making the decisions.
| rsync wrote:
| I have a suspicion that the medium is the message at HN:
|
| No pictures and no avatars.
|
| I wonder how much bad behavior is weeded out by the interface
| itself?
|
| A lot, I suspect ...
| jdp23 wrote:
| Nope. There's been abuse in text-only environments online
| since forever. And lots of people have left (or rarely post
| on) HN because of complaints about the environment here.
| ChainOfFools wrote:
| > No pictures and no avatars
|
| This is essentially moderation rule #0. it is unwritten,
| enforced before violation can occur, and generates zero
| complaints because it filters complainers out of the user
| pool from the start.
| luckylion wrote:
| The no-avatars rule also takes away some of the
| personalization aspect. If you set your account up with
| your nickname, your fancy unique profile picture and your
| favorite quote in the signature, and someone says you're
| wrong, you're much more invested because you've tied some
| of your identity to the account.
|
| If you've just arrived on the site, have been given a
| random name and someone says you're wrong, what do you
| care? You're not attached to that account at all, it's
| not "you", it's just a random account on a random
| website.
|
| I thought that was an interesting point on 4chan (and
| probably other sites before them), that your identity was
| set per thread (iirc they only later introduced the
| ability to have permanent accounts). That removes the
| possibility of you becoming attached to the random name.
| dfxm12 wrote:
| _How do you build and run a "social media" product when the
| very act of letting anyone respond to anyone with anything is
| itself the fundamental problem?_
|
| This isn't the problem as much as giving bad actors tools to
| enhance their reach. Bad actors can pay to get a wider reach or
| get/abuse a mark of authority, like a special tag on their
| handle, getting highlighted in a special place within the app,
| gaming the algorithm that promotes some content, etc. Most of
| these tools are built into the platform. Some though, like sock
| puppets, can be detected but aren't necessarily built in
| functionality.
| bambax wrote:
| You're confusing _bad actors_ with _bad behavior_. Bad behavior
| is something good people do from time to time because they get
| really worked up about a specific topic or two. Bad actors are
| people who act bad all the time. There may be some of those but
| they're not the majority by far (and yes, sometimes normal
| people turn into bad actors because they get so upset about a
| given thing that they can't talk about anything else anymore).
|
| OP's argument is that you can moderate content based on
| behavior, in order to bring the heat down, and the signal to
| noise ratio up. I think it's an interesting point: it's neither
| the tools that need moderating, nor the people, but
| _conversations_ (one by one).
| rlucas wrote:
| ++
|
| A giant amount of social quandaries melt away when you
| realize:
|
| "Good guys" and "Bad guys" is not a matter of identity, it's
| a matter of activity.
|
| You aren't a "Good guy" because of _who you are_ , but
| because of _what you do_.
|
| There are vanishingly few people who as a matter of identity
| are reliably and permanently one way or another.
| dang wrote:
| I think that's right. One benefit this has: if you can make
| the moderation about behavior (I prefer the word effects [1])
| rather than about the person, then you have a chance to
| persuade them to behave differently. Some people, maybe even
| most, adjust their behavior in response to feedback. Over
| time, this can compound into community-level effects (culture
| etc.) - that's the hope, anyhow. I _think_ I've seen such
| changes on HN but the community/culture changes so slowly
| that one can easily deceive oneself. There's no question it
| happens at the individual user level, at least some of the
| time.
|
| Conversely, if you make the moderation about the person
| (being a bad actor etc.) then the only way they can agree
| with you is by regarding themselves badly. That's a weak
| position for persuasion! It almost compels them to resist
| you.
|
| I try to use depersonalized language for this reason. Instead
| of saying " _you_ " did this (yeah that's right, YOU), I'll
| tell someone that their _account_ is doing something, or that
| their _comment_ is a certain way. This creates distance
| between their account or their comment and _them_, which
| leaves them freer to be receptive and to change.
|
| Someone will point out or link to cases where I did the exact
| opposite of this, and they'll be right. It's hard to do
| consistently. Our emotional programming points the other way,
| which is what makes this stuff hard and so dependent on self-
| awareness, which is the scarcest thing and not easily added
| to [2].
|
| [1] https://news.ycombinator.com/item?id=33454968
|
| [2] https://news.ycombinator.com/item?id=33448079
| bombcar wrote:
| The other tricky thing is a bad actor will work to stay
| just this side of the rules while still causing damage and
| destruction to the forum itself.
| dang wrote:
| Yes. But in our experience to date, this is less common
| than people say it is, and there are strategies for
| dealing with it. One such strategy is
| https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme... (sorry I
| don't have time to explain this, as I'm about to go
| offline - but the key word is 'primarily'.) No strategy
| works in all cases though.
| jimkleiber wrote:
| > I try to use depersonalized language for this reason.
| Instead of saying "you" did this (yeah that's right, YOU),
| I'll tell someone that their account is doing something, or
| that their comment is a certain way. This creates distance
| between their account or their comment and them, which
| leaves them freer to be receptive and to change.
|
| I feel quite excited to read that you, dang, moderating HN,
| use a similar technique that I use for myself and try to
| teach others. Someone told my good friend the other day
| that he wasn't being a very good friend to me, and I told
| him that he may do things that piss me off, annoy me,
| confuse me, or whatever, but he will always be a good
| friend to me. I once told an Uber driver who told me he
| just got out of jail and was a bad man, I said, "No, you're
| a good man who probably did a bad thing."
|
| Thank you for your write-up.
| user3939382 wrote:
| > persuade the user to behave differently
|
| That scares me. Today's norms are tomorrow's taboos. The
| dangers of conforming and shaping everyone into the least
| controversial opinions and topics are self evident. It's an
| issue on this very forum. "Go elsewhere" doesn't solve the
| problem because that policy still contributes to a self-
| perpetuating feedback loop that amplifies norms, which
| often happen to be corrupt and related to the interests of
| big (corrupt) commercial and political powers.
| dang wrote:
| I don't mean persuade them out of their opinions on
| $topic! I mean persuade them to express their opinions in
| a thoughtful, curious way that doesn't break the site
| guidelines -
| https://news.ycombinator.com/newsguidelines.html.
| user3939382 wrote:
| Sufficiently controversial opinions are flagged,
| downvoted til dead/hidden, or associated users shadow
| banned. HN's policies and voting system, both de facto
| and de jure, discourage controversial opinions and reward
| popular, conformist opinions.
|
| That's not to pick on HN, since this is a common problem.
| Neither do I have a silver bullet solution, but the issue
| remains, and it's a huge issue. Evolution of norms, for
| better or worse, is suppressed to the extent that big
| communication platforms suppress controversy. The whole
| concept of post and comment votes does this by
| definition.
| dang wrote:
| That's true to an extent (and so is what ativzzz says, so
| you're both right). But the reasons for what you're
| talking about are much misunderstood. Yishan does a good
| job of going into some of them in the OP, by the way.
|
| People always reach immediately for the conclusion that
| their controversial-opinion comments are getting
| moderated because people dislike their controversial
| opinion--either because of groupthink in the community or
| because the admins are hostile to their views. Most of
| the time, though, they've larded their comments pre-
| emptively with some sort of hostility, snark, name-
| calling, or other aggression--no doubt because they
| expect to be opposed and want to make it clear they
| already know that, don't care what the sheeple think, and
| so on.
|
| The way the group and/or the admins respond to those
| comments is often a product of those secondary mixins.
| Forgive the gross analogy, but it's as if someone serves
| a shit milkshake and when it's rejected, says, "you just
| hate dairy products" or "this community is so biased
| against milkshakes".
|
| If you start instead from the principle that the value of
| a comment is the expected value of the subthread it forms
| the root of [1], then a commenter is responsible for the
| effects of their comments [2] - at least the predictable
| ones. From that it follows that there's a greater burden
| on the commenter who's expressing a contrarian view [3].
| The more contrarian the view--the further it falls
| outside the community's tolerance--the more
| responsibility that commenter has for not triggering
| degenerative effects like flamewars.
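|
| (Formalized loosely, and only as a sketch: $V(c) = v(c) +
| \sum_{r \in replies(c)} E[V(r)]$ - a comment's value is its
| own value plus the expected value of the replies it
| predictably spawns, recursively.)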
|
| This may be counterintuitive, because we're used to
| thinking in terms of atomic individual responsibility,
| but it's a model that actually works. Threads are
| molecules, not atoms--they're a cocreation, like one of
| those drawing games where each person fills in part of a
| shared picture [4], or like a dance--people respond to
| the other's movements. A good dancer takes the others
| into account.
|
| It may be unfair that the one with a contrarian view is
| more responsible for what happens--especially because
| they're already under greater pressure than the one whose
| views agree with the surround. But fair or not, it's the
| way communication works. If you're trying to deliver
| challenging information to someone, you have to take that
| person into account--you have to regulate what you say by
| what the listener is capable of hearing and tolerating.
| Otherwise you're predictably going to dysregulate them
| and ruin the conversation.
|
| Contrarian commenters usually do the opposite of this--
| they express their contrarian opinion in a deliberately
| aggressive and uncompromising way, probably because (I'm
| repeating myself sorry) they expect to be rejected
| anyhow, and it's safer to be inside the armor of "you
| people can't handle the truth!" than it is to really
| communicate, i.e. to connect and relate.
|
| This model is the last thing that most contrarian-opinion
| commenters want to adopt, because it's hard and risky,
| and because usually they have pre-existing hurt feelings
| from being battered repeatedly with majoritarian opinions
| already (especially the case when identity is at issue,
| such as being from a minority population along whatever
| axis). But it's a model that actually works and it's by
| far the best solution I know of to the problem of
| unconventional opinions in forums.
|
| Are there some opinions which are so far beyond the
| community's tolerance that any mention in any form will
| immediately blow up the thread, making the above model
| impossible? Yes.
|
| [1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
|
| [2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...
|
| [3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
|
| [4] https://news.ycombinator.com/item?id=6813226
| ativzzz wrote:
| Completely disagree about HN. Controversial topics that
| are thought out, well formed, and argued with good intent
| are generally good sources of discussion.
|
| Most of the time though, people arguing controversial
| topics phrase them so poorly or include heavy handed
| emotions so that their arguments have no shot of being
| fairly interpreted by anyone else.
| pbhjpbhj wrote:
| You're doing a good job dang.
|
| ... kinda wondering if this is the sort of OT post we're
| supposed to avoid, it would be class if you chastised me
| for it. But anyway, glad you're here to keep us in check
| and steer the community so well.
| dang wrote:
| For stuff like that I go by what pg wrote many years ago:
| https://news.ycombinator.com/newswelcome.html
|
| _Empty comments can be ok if they're positive. There's
| nothing wrong with submitting a comment saying just
| "Thanks." What we especially discourage are comments that
| are empty and negative--comments that are mere name-
| calling._
|
| It's true that empty positive comments don't add much
| information but they have a different healthy role
| (assuming they aren't promotional).
| ar_lan wrote:
| You definitely hit the nail on the head.
|
| If someone points out a specific action I did that
| can/should be improved upon (and especially if they can
| tell me why it was "bad" in the first place), I'm far more
| likely to accept that, attempt to learn from it, and move
| on. As in real life, I might still be heated in the moment,
| but I'll usually remember that when similar cues strike
| again.
|
| But if moderation hints at something being wrong with my
| identity or just me fundamentally, then that points to
| something that _can't be changed_. If that's the case, I
| _know they are wrong_ and simply won't respect that they
| know how to moderate anything at all, because their
| judgment is objectively incorrect.
|
| Practically at work, this has actually been a good policy
| you described when I think about bugs and code reviews.
|
| > "@ar_lan broke `main` with this CLN. Reverting."
|
| is a pretty sure-fire way to make me defend my change and
| believe you are wrong. My inclination, for better or worse,
| will be to dispute the accusation directly and clear my
| name (probably some irrational fear that creating a bug
| will go on a list of reasons to fire me).
|
| But when I'm approached with:
|
| > "Hey, @ar_lan. It looks like pipeline X failed this test
| after this CLN. We've automatically reverted the commit.
| Could you please take a second look and re-submit with a
| verification of the test passing?"
|
| I'm almost never defensive about it, and I almost always go
| right ahead to reproducing the failure and working on the
| fix.
|
| The first message conveys to me that I (personally) am the
| reason `main` is broken. The second conveys that it was my
| CLN that was problematic, but fixable.
|
| Both messages are taken directly from my company's Slack
| (omitting some minor details, of course), for reference.
| camgunz wrote:
| I think your moderation has made me better at HN, and
| consequently I'm better in real life. Actively thinking
| about how to better communicate and create environments
| where everyone is getting something positive out of the
| interaction is something I maybe started at HN, and then
| took into the real world. I think community has a lot to do
| with it, like "be the change you want to see".
|
| But to your point, yeah my current company has feedback
| guidelines that are pretty similar: criticize the work, not
| the worker, and it super works. You realize that action
| isn't aligned with who you want to be or think you are, and
| you stop behaving that way. I mean, it's worked on me and
| I've seen it work on others, for sure.
| mypalmike wrote:
| I can "behave well" and still be a bad actor in that I'm
| constantly spreading dangerous disinformation. That
| disinformation looks like signal by any metadata analysis.
| bambax wrote:
| Yes, that's probably the limit of the pure behavioral
| analysis, esp. if one is sincere. If they're insincere it
| will probably look like spam; but if somebody truly
| believes crazy theories and is casually pushing them (vs
| promoting them aggressively and exclusively), that's
| probably harder to spot.
| afiori wrote:
| I think you agree with the parent.
|
| They pointed out that everybody can be a bad actor and you
| will not find a way to get better users.
| whoopdedo wrote:
| And bad behavior gets rewarded with engagement. We learned
| this from "reality television" where the more conflict there
| was among a group of people the more popular that show was.
| (Leading to producers abandoning the purity of being
| unscripted in the pursuit of better ratings.) A popular
| pastime on Reddit is posting someone behaving badly (whether
| on another site, a subreddit, or in a live video) for the
| purpose of mocking them.
|
| When the organizational goal is to increase engagement, which
| will be the case wherever there are advertisers, inevitably
| bad behavior will grow more frequent than good behavior.
| Attempts to moderate toward good behavior will be abandoned
| in favor of better metrics. Or the site will stagnate under
| the weight of the new rules.
|
| In this I'm in disagreement with Yishan because in those
| posts I read that engagement feedback is a characteristic of
| old media (newspapers, television) and social media tries to
| avoid that. The OP seems to be saying that online moderation
| is an attempt to minimize controversial engagement because
| platforms don't like that. I don't believe it. I think social
| media loves controversial engagement just as much as the old-
| school "if it bleeds, it leads" journalists from television
| and newspapers. What they don't want is the (quote/unquote)
| wrong kind of controversies. Which is to say, what defines
| bad behavior is not universally agreed upon. The threshold
| for what constitutes bad behavior will be different depending
| on who's doing the moderating. As a result the content seen
| will be influenced by the moderation, even if said moderation
| is being done in a content-neutral way.
|
| And I just now realize that I've taken a long trip around to
| come to the conclusion that the medium is the message. I
| guess we can now say the moderation is the message.
| jonny_eh wrote:
| > Bad actors are people who act bad all the time
|
| I'd argue that bad actors are people that behave badly "on
| purpose". Their _goals_ are different than the normal actor.
| Bad actors want to upset or scare people. Normal actors want
| to connect with, learn from, or persuade others.
| paradite wrote:
| I recently started my own Discord server and had my first
| experience in content moderation. The demographics is mostly
| teenagers. Some have mental health issues.
|
| It was the hardest thing ever.
|
| In first incident I chose to ignore a certain user being targeted
| by others for posting repeated messages. The person left a very
| angry message and left.
|
| Comes the second incident, I thought I learnt my lesson. Once a
| user is targeted, I tried to stop others from targeting the
| person. But this time the people who targeted the person wrote
| angry messages and left.
|
| Someone asked a dumb question, I replied in good faith. The
| conversation goes on and on and becomes weirder and weirder,
| until the person said "You shouldn't have replied me.", and left.
|
| Honestly I am just counting on luck at this time that I can keep
| it running.
| derefr wrote:
| > In first incident I chose to ignore a certain user being
| targeted by others for posting repeated messages. The person
| left a very angry message and left.
|
| > Comes the second incident, I thought I learnt my lesson. Once
| a user is targeted, I tried to stop others from targeting the
| person. But this time the people who targeted the person wrote
| angry messages and left.
|
| Makes me think that moderators should have the arbitrational
| power to take two people or groups, and (explicitly, with
| notice to both people/groups) make each person/group's public
| posts invisible to the other person/group. Like a cross between
| the old Usenet ignore lists, and restraining orders, but
| externally-imposed without either party actively seeking it
| out.
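|
| A sketch of the bookkeeping that would need (hypothetical
| names, purely illustrative):
|
|     # Moderator-imposed mutual invisibility: a symmetric
|     # "restraining order" between two users or groups.
|     separations = set()
|
|     def separate(a: str, b: str):
|         """Moderator action: a and b stop seeing each other."""
|         separations.add(frozenset((a, b)))
|
|     def visible_to(viewer: str, author: str) -> bool:
|         return frozenset((viewer, author)) not in separations
|
|     separate("alice", "bob")
|     assert not visible_to("alice", "bob")  # hidden both ways,
|     assert not visible_to("bob", "alice")  # with notice to both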
| watwut wrote:
| Imo, some people leaving is not necessarily a bad thing. Like,
| some people are looking for someone to bully. Either you allow
| them to bully or they leave. The choice determines the overall
| culture of your community.
|
| And sometimes people are looking for a fight and will search it
| until they find it ... and then leave.
| Goronmon wrote:
| _And sometimes people are looking for a fight and will search
| it until they find it ... and then leave._
|
| I've found the more likely result is that people looking for
| a fight will find it, and then stay because they've found a
| target and an audience. Even if the audience is against them
| (and especially so if moderators are against them), for some
| people that just feeds their needs even more.
| thepasswordis wrote:
| How old are you? An adult running a discord server for mentally
| ill teenagers seems like a cautionary tale from the 1990s about
| chatrooms.
| paradite wrote:
| I'm afraid I'm too young to understand that reference or
| context around chatrooms.
|
| Anyway, the Discord server is purely for business and
| professional purposes. And I use the same username everywhere
| including Discord, so it's pretty easy to verify my identity.
| Tenal wrote:
| drekipus wrote:
| It's in vogue today.
| skissane wrote:
| > An adult running a discord server for mentally ill
| teenagers seems like a cautionary tale from the 1990s about
| chatrooms
|
| It sounds like a potential setup for exploitation, grooming,
| cult recruitment, etc. (Not saying the grandparent is doing
| this, for all I know their intentions are entirely above
| board - but other people out there likely are doing it for
| these kinds of reasons.)
| [deleted]
| peruvian wrote:
| Discord is already considered a groomer hotspot, at least
| jokingly. You can join servers based on interests alone
| and find yourself in a server with very young people.
| TulliusCicero wrote:
| I doubt it's explicitly for mentally ill teenagers. It could
| be, say, a video game discord, and so the demographics are
| mostly teens who play the game, and obviously some subset
| will be mentally ill.
| strken wrote:
| It's probably something like this. I'm interested in a
| specific videogame and have bounced around a lot of
| discords trying to find one where most of the members are
| older. We still have some under-18s (including one guy's
| son), but they're in the minority, and that makes
| everything easier to moderate. We can just ban (or temp-
| ban) anyone who's bringing the vibe down and know that the
| rest will understand and keep the peace.
|
| Teens don't have as much experience with communities going
| to shit, or with spaces like the workplace where you're
| collectively responsible for the smooth running of the
| group. They're hot-headed and can cause one bad experience
| to snowball where an adult might forgive and forget.
|
| About the only thing that makes mentally healthy adults
| hard to moderate is when they get drunk or high and do
| stupid stuff because they've stopped worrying about
| consequences.
| MichaelCollins wrote:
| > _Teens don 't have as much experience with communities
| going to shit, or with spaces like the workplace where
| you're collectively responsible for the smooth running of
| the group. They're hot-headed and can cause one bad
| experience to snowball where an adult might forgive and
| forget._
|
| Some people, not just teens of course, feel utterly
| compelled to go tit-for-tat, to retaliate in kind. Even
| if you can get them to cool down and back off for a
| while, and have a civil conversation with you about
| disengaging, they may tell you that they're going to
| retaliate against the other person anyway at a later
| date, in cold blooded revenge, because they _have to_.
| That necessity seems to be an inescapable reality for
| such people. They feel they have _no choice_ but to
| retaliate.
|
| When two such people encounter each other and an accident
| is misperceived as an offense, what follows is
| essentially a blood feud. An unbreakable cycle of
| retaliation after retaliation. Even if you can get to the
| bottom of the original conflict, they'll continue
| retaliating against each other for the later acts of
| retaliation. The only way to stop it is to ban one if not
| both of them. Moderation sucks, never let somebody talk
| you into it.
| pr0zac wrote:
| My interpretation was he ran a discord server for a topic
| whose demographics happened to include a large number of
| teenagers and folks with mental illness thus unintentionally
| resulting in a discord containing a lot of them, not that he
| was specifically running a discord server targeting mentally
| ill teens.
| bmitc wrote:
| I think all this just revolves around humans being generally
| insane and emotionally unstable. Technology just taps into
| this, exposes it, and connects it to others.
| themitigating wrote:
| Wow, and now we all learned that nothing should be censored
| thanks to this definitely real situation where the same outcome
| occurred when you censored both the victim and perpetrator
| DoItToMe81 wrote:
| Mental illness or not, your interactions with users in a
| service with a block button are all voluntary. Unless someone
| is going out of their own way to drag drama out of Discord, or
| god forbid, into real life, it tends to be best to just let it
| happen, as they are entirely willingly participating in it and
| the escape is just a button away.
| watwut wrote:
| Communities defined by the most aggressive people who come
| in tend to be the ones where everyone else voluntarily
| leaves, because leaving is much better for them.
| TulliusCicero wrote:
| I see this a fair amount, and yeah, "just let people block
| others" is really terrible moderation advice.
|
| Besides the very reasonable expectation almost everyone has
| that assholes will be banned, the inevitable result of not
| banning assholes is that you get more and more assholes,
| because their behavior will chase away regular users. Even
| some regular users may start acting more like assholes,
| because what do you do when someone is super combative, aside
| from possibly leaving? You become combative right back, to
| fight back.
| beezlebroxxxxxx wrote:
| IME, places (or forums, or social networks, etc.) with good
| moderation tend to fall into 2 camps of putting that into play:
|
| 1. The very hands-off approach style that relies on the subject
| matter of the discussion/topic of interest naturally weeding
| out "normies" and "trolls" with moderation happening "behind
| the curtain";
|
| 2. The very hands-on approach that relies on explicit clear
| rules and no qualms about acting on those rules, so moderation
| actions are referred directly back to the specific rule broken
| and in plain sight.
|
| Camp 1 begins to degrade as more people use your venue; camp 2
| degrades as the venue turns over to debate about the rules
| themselves rather than the topic of interest that was the whole
| point of the venue itself (for example, this is very common in
| a number of subreddits where break-off subreddits usually form
| in direct response to a certain rule or the enforcement of a
| particular rule).
| derefr wrote:
| Camp 2 works fine in perpetuity _if_ the community is built
| as a cult of personality around a central authority figure;
| where the authority figure is also the moderator (or, if
| there are other moderators, their authority is delegated to
| them by the authority figure, and they can always refer
| arbitration back to the authority figure); where the clear
| rules are understood to be _descriptive_ of the authority's
| decision-tree, rather than _prescriptive_ of it -- i.e.
| "this is how I make a decision; if I make a decision that
| doesn't cleanly fit this workflow, I won't be constrained by
| the workflow, but I will try to change the workflow such that
| it has a case for what I decided."
| wwweston wrote:
| Is people leaving and founding a different forum with
| different rules really a failure/degradation?
| cloverich wrote:
| It would be cool if such forks were transparent on the
| original forum / subreddit, and if they also forked on
| specific rules. I.e. like a diff with rule 5 crossed out /
| changed / new rule added, etc.
| wutbrodo wrote:
| I've seen an example of this. The fork is less active
| than the original, but I wouldn't call it a failure.
| Rather, it was a successful experiment with a negative
| result. The original forum was the most high-quality
| discussion forum I've ever experienced in my life, so
| this wasn't quite a generalizable experiment.
| krippe wrote:
| Someone asked a dumb question, I replied in good faith. The
| conversation goes on and on and becomes weirder and weirder,
| until the person said "You shouldn't have replied me.", and
| left.
|
| Haha wtf, why would they do that?
| TulliusCicero wrote:
| I'm confused, do you think some individual leaving is a failure
| state? Realistically I don't think you can avoid banning or
| pissing some people off as a moderator, at least in most cases.
|
| There's a lot of people whose behavior on internet message
| boards/chat groups can be succinctly summarized as, "they're an
| asshole." Now maybe IRL they're a perfectly fine person, but
| for whatever reason they just engage like a disingenuous jerk
| on the internet, and the latter case is what's relevant to you
| as a moderator. In some cases a warning or talking-to will
| suffice for people to change how they engage, but often times
| it won't, they're just dead set on some toxic behavior.
| shepherdjerred wrote:
| > I'm confused, do you think some individual leaving is a
| failure state?
|
| When you are trying to grow something, them leaving is a
| failure.
|
| I ran a Minecraft server for many years when I was in high
| school. It's very hard to strike a balance of:
|
| 1. Having players
|
| 2. Giving those players a positive experience (banning
| abusers)
|
| 3. Stepping in only when necessary
|
| Every player that I banned meant I lost some of my player
| base. Some players in particular would cause an entire group
| to leave. Of course, plenty of players have alternate
| accounts and would just log onto one of those.
| TulliusCicero wrote:
| I think it _can_ be a failure state, certainly, but
| sometimes it 's unavoidable, and banning someone can also
| mean more people in the community, rather than less.
|
| Would HN be bigger if it had always had looser moderation
| that involved less banning of people? I'm guessing not.
|
| edit: I guess what I was thinking was that often in a
| community conflict where one party is 'targeted' by another
| party, banning one of those parties is inevitable. Not
| always, but often people just cannot be turned away from
| doing some toxic thing, they feel that they're justified in
| some way and would rather leave/get banned than stop.
| whatshisface wrote:
| The person leaving is the least bad part of what happened in
| the OP's example, try reading this again?:
|
| > _In first incident I chose to ignore a certain user being
| targeted by others for posting repeated messages. The person
| left a very angry message and left._
| TulliusCicero wrote:
| They have three examples, and all of them ended with the
| person leaving; it just sounded to me like they were
| implying that the person leaving represented a failure on
| their part as a moderator. That, had they moderated better,
| they could've prevented people leaving.
| whatshisface wrote:
| Each of the examples had something bad happen in the
| lead-up to the person leaving.
| TulliusCicero wrote:
| Yes, and? I honestly can't tell what you're getting at
| here.
| whatshisface wrote:
| That the bad thing they were talking about was the bad
| stuff leading up to the person leaving.
| TulliusCicero wrote:
| That was bad yes, but it sounds like they feel that the
| outcome each time of someone leaving (and possibly with
| an angry message) was also bad, and indicative that they
| handled the situation incorrectly.
| lovehashbrowns wrote:
| Discord is particularly tough, depending on the type of
| community. I very briefly moderated a smaller community for a
| video game, and goodness was that awful. There was some
| exceptionally egregious behavior, which ultimately made me
| quit, but even things like small cliques. Any action, perceived
| or otherwise, taken against a "popular" member of that clique
| would immediately cause chaos as people would begin taking
| sides and forming even stronger cliques.
|
| One of the exceptionally egregious things that made me quit
| happened in a voice call where someone was screensharing
| something deplorable (sexually explicit content with someone
| that wasn't consenting to the screensharing). I wouldn't have
| even known it happened except that someone in the voice call
| wasn't using their microphone, so I was able to piece together
| what happened from them typing in the voice chat text channel.
| I can't imagine the horror of moderating a larger community
| where various voice calls are happening at all times of the
| day.
| ChainOfFools wrote:
| flamebait directed at specific groups: cliquebait
|
| /s
| [deleted]
| ojosilva wrote:
| There are so many tangible vectors in content! It makes me feel
| like moderation is a doable, albeit hard to automate, task:
|
| - substantiated / unsubstantiated
| - extreme / moderate
| - controversial / anodyne
| - fact / fun / fiction
| - legal / unlawful
| - mainstream / niche
| - commercial / free
| - individual / collective
| - safe / unsafe
| - science / belief
| - vicious / humane
| - blunt / tactful
| - etc. etc.
|
| Maybe I'm too techno-utopic, but can't we model AI to detect how
| these vectors combine to configure moderation?
|
| Ex: Ten years ago masks were _niche_, therefore
| _unsubstantiated_ news on the drawbacks of wearing masks was
| still considered _safe_ because very few people were paying
| attention and/or could harm themselves, so that it was not
| _controversial_ and did not require moderation. Post-covid,
| the vector values changed: questionable content about masks
| could be flagged for moderation with some intensity indexes,
| user-discretion-advised messages and/or links to rebuttals
| if applicable.
|
| Let the model and results be transparent and reviewable, and,
| most important, editorial. I think the greatest mistake of
| moderated social networks is that many people (and the network
| themselves) think that these internet businesses are not
| "editorial", but they are not very different from regular news
| sources when it comes to editorial lines.
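|
| Mechanically, "vectors configuring moderation" could be as
| simple as a weighted combination; all categories, weights and
| thresholds below are invented for illustration:
|
|     # Score content along a few of the vectors above, then
|     # combine them into an editorial, reviewable decision.
|     content_vectors = {      # from some classifier, 0..1 each
|         "unsubstantiated": 0.9,
|         "controversial":   0.8,  # post-covid masks: now high
|         "unsafe":          0.7,
|         "niche":           0.1,  # no longer niche, so low
|     }
|     weights = {"unsubstantiated": 0.4, "controversial": 0.3,
|                "unsafe": 0.3, "niche": -0.2}
|
|     score = sum(weights[k] * v
|                 for k, v in content_vectors.items())
|     if score > 0.6:
|         action = "flag + link to rebuttals"
|     elif score > 0.3:
|         action = "user-discretion-advised message"
|     else:
|         action = "no moderation"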
| raxxorraxor wrote:
| Not a good idea. Your example already has flaws. An AI could
| perform on a larger scale, but the result would be worse.
| Probably far worse.
|
| I specifically don't want any editor for online content. Just
| don't make it boring or, worse, turn everything into
| astroturfing. Masks are a good example already.
| pixl97 wrote:
| >Maybe I'm too techno-utopic,
|
| https://xkcd.com/1425/
|
| I personally believe this won't be a solvable problem, or at
| least the problem will grow a long tail. One example would be
| hate groups co-opting the language of the victim group in an
| intentional manner. Then as the hate group is moderated for
| their behaviors, the victim group is caught up in the action by
| intentional user reporting for similar language.
|
| It's a difficult problem to deal with as at least some portion
| of your userbase will be adversarial and use external signaling
| and crowd sourced information to cause issues with your
| moderation system.
| fleddr wrote:
| In the real world, when you're unhinged, annoying,
| intrusive...you face almost immediate negative consequences. On
| social media, you're rewarded with engagement. Social media
| owners "moderate" behavior that maximizes the engagement they
| depend on, which makes it somewhat of a paradox.
|
| It would be similar to a newspaper "moderating" their journalists
| to bring news that is balanced, accurate, fact-checked, as
| neutral as possible, with no bias to the positive or negative.
| This wouldn't sell any actual news papers.
|
| Similarly, nobody would watch a movie where the characters are
| perfectly happy. Even cartoons need villains.
|
| All these types of media have exploited our psychological draw to
| the unusual, which is typically the negative. This attention hack
| is a skill evolved to survive, but now triggered all day long for
| clicks.
|
| Can't be solved? More like unwilling to solve. Allow me to clean
| up Twitter:
|
| - Close the API for posting replies. You can have your weather
| bot post updates to your weather account, but you shouldn't be
| able to instant-post a reply to another account's tweet.
|
| - Remove the retweet and quote tweet buttons. This is how things
| escalate. If you think that's too radical, there's plenty of
| variations: a cap on retweets per day. A dampening of how often a
| tweet can be retweeted in a period of time to slow the network
| effect.
|
| - Put a cap on max tweets per day.
|
| - When you go into a polarized thread and rapidly like a hundred
| replies that are on your "side", you are part of the problem and
| don't know how to use the like button. Hence, a cap on max likes
| per day or per thread, so that likes become quality likes that
| require thought (a sketch of such caps follows this list).
| Alternatively, make shadow-likes: likes that don't do anything.
|
| - When you're a small account spamming low effort replies and the
| same damn memes on big accounts, you're hitchhiking. You should
| be shadow-banned for that specific big account only. People would
| stop seeing your replies only in that context.
|
| - Mob culling. When an account or tweet is mass reported in a
| short time frame and it turns out that it was well within
| guidelines, punish every single user making those reports. Strong
| warning, after repeated abuse a full ban or taking away the
| ability to report.
|
| - DM culling. It's not normal for an account to suddenly receive
| hundreds or thousands of DMs. Where a pile-on in replies can be
| cruel, a pile-on in DMs is almost always harassment. Quite a few
| people are OK with it as long as the target is their (political)
| enemy, but we should reject it on principle. People joining such
| campaigns aren't good people, they are sadists. Hence they should
| be flagged as potentially harmful. The moderation action here is
| not straightforward, but surely something can be done.
|
| - Influencer moderation. At regular intervals, comb through new
| influencers manually, for example those breaking 100K followers.
| For each, inspect how they came to power. Valuable, widely loved
| content? Or toxic engagement games? If it's the latter, dampen
| the effect, tune the algorithm, etc.
|
| - Topic spam. Twitter has "topics", a great way to engage in a
| niche. But they're all engagement-farmed. Go through these topics
| manually every once in a while and use human judgement to tackle
| the worst offenders (and behaviors).
|
| - Allow for negative feedback (dislike) but with a cap. In case
| of a dislike mob, take away their ability to dislike or cap it.
|
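| A minimal sketch of how the per-day and per-thread caps above
| might be enforced (all limits and window sizes invented for
| illustration, not anything Twitter actually does):
|
|     import time
|     from collections import defaultdict, deque
|
|     MAX_LIKES_PER_DAY = 50
|     MAX_LIKES_PER_THREAD = 3
|     DAY = 86400  # seconds
|
|     recent_likes = defaultdict(deque)  # user_id -> timestamps
|     thread_likes = defaultdict(int)    # (user, thread) -> count
|
|     def allow_like(user_id, thread_id, now=None):
|         now = time.time() if now is None else now
|         window = recent_likes[user_id]
|         while window and now - window[0] > DAY:
|             window.popleft()  # drop likes older than 24 hours
|         if len(window) >= MAX_LIKES_PER_DAY:
|             return False  # or record a shadow-like instead
|         if thread_likes[(user_id, thread_id)] >= MAX_LIKES_PER_THREAD:
|             return False
|         window.append(now)
|         thread_likes[(user_id, thread_id)] += 1
|         return True
|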
| Note how none of these potential measures addresses what it is
| that you said; they address behavior: the very obvious
| misuse/abuse of the system. In that sense I agree with the
| author. Also, none of it requires AI. The patterns are incredibly
| obvious.
|
| All of this said, the above would probably make Twitter quite an
| empty place. Because escalated outrage is the product.
| lm28469 wrote:
| Reading these threads on twitter is like listening to a friend
| having a bad MDMA trip, replaying his whole emotional life to you
| in a semi-incoherent, diarrhea-like stream of thoughts
|
| Please write a book, or at the very least an article... posting
| on twitter is like writing something on a piece of paper, showing
| it to your best friend and worst enemy, then throwing it in the
| trash
| Canada wrote:
| If only there was some site that was good for posting longer
| text, with a really good comment system to discuss it...
| mbesto wrote:
| And hilariously he starts with "How do you solve the content
| moderation problems on Twitter?" and never actually answers it.
| Just rambles on with a dissection of the problem. Guess we
| know now why content moderation was never "solved" at Reddit,
| nor will it ever be.
| ilyt wrote:
| He kinda did, in a roundabout way; "perfect" moderation, even
| if possible, would turn it into a nice and cultured place to
| have discussions, and _that doesn't bring controversy and
| sell ads_.
|
| You would have far fewer media "journalists" making a fuss
| about what someone said on that social network, and you would
| have problems just getting it to be popular, let alone
| displacing any of the big ones. It would maybe be possible with
| an existing one, but that's a ton of work and someone needs to
| pay for that work.
|
| And it's entirely possible for a smaller community to have
| that. The advantage there is that a small community about X
| will also have moderators who care about X, so:
|
| * any on-topic bollocks can be spotted by mods and is no
| longer an "unknown language"
|
| * any off-topic bollocks can be dismissed with "this is
| a forum about X, if you don't like it go somewhere else"
| mbesto wrote:
| > the "perfect" moderation, even if possible, will turn it
| into nice and cultured place to have discussion and that
| doesn't bring controversy and sell ads.
|
| That's not a solution though, since every for-profit
| business is generally seeking to maximize profit, and
| furthermore we already knew this to be the case - nothing
| he is saying is novel. I guess that's where I'm confused.
| drewbeck wrote:
| There's a study to be done on the polarization around twitter
| threads. I have zero problem with them and find overall that
| lots of great ideas are posted in threads, and the best folks
| doing it end up with super cogent and well written pieces. I
| find it baffling how many folks are triggered by them and
| really hate them!
| rchaud wrote:
| This is likely because threads are a "high engagement" signal
| for Twitter and therefore prone to being gamed.
|
| There are courses teaching people how to game the Twitter
| algo. One of those took off significantly in the past 18
| months. You can tell by the number of amateurs creating
| threads on topics far beyond their reach. The purpose of
| these threads is to show up on people's feeds under the
| "Topic" section.
|
| For example, I often see random posts from "topics"
| Twitter thinks I like (webdev, UI/UX, cats, old newspaper
| headlines). I had to unsubscribe from "webdev" and "UI/UX"
| because the recommended posts were all growth hackers. It
| wasn't always that way.
|
| I'm not the only one, others have commented on it as well,
| including a well known JS developer:
|
| https://twitter.com/wesbos/status/1587071684539973633
| drewbeck wrote:
| > This is likely because threads are a "high engagement"
| signal for Twitter and therefore prone to being gamed.
|
| You mean this is the reason folks respond differently to
| the Twitter thread form? This one is definitely not from a
| growth hacker, but folks here still seem to hate it.
| peruvian wrote:
| Thing is no one's going to read a blog post that he would've
| linked to. As bad as they are, Twitter threads guarantee a way
| larger audience.
| pc86 wrote:
| These things seem to be fine when it's 5-6 tweets in a coherent
| thread. There's even that guy who regularly posts multi-
| thousand-word threads that are almost always a good read.
|
| This thread in particular is really bad.
| heed wrote:
| What got me was him weaving in (2-3 times) self-promotion
| tweets for some tree-planting company he funds/founded(?). He
| basically embedded ads into his own thread, which is actually
| kind of smart I suppose, but very confusing as a reader.
| pjc50 wrote:
| Kind of genius to put it in the middle. Most normal people
| write a tweet that blows up and then have to append "Check
| out my soundcloud!" on the end. Or an advert for the nightsky
| lamp.
| rchaud wrote:
| That was the middle? I stopped reading once it got to the
| "here's my exciting new gig about trees" bit.
| adharmad wrote:
| There is one at the end too (if you reach that far)
| shilling for another company where he is an investor -
| Block Party.
| mikeryan wrote:
| I didn't even know he circled back around to the topic. I
| split when I got to "TREES!", wondering "that's it?"
|
| After this comment I went back to read the rest.
| toss1 wrote:
| At the same time (as much as I strongly support climate
| efforts, and am impressed by his approach, so I give him a pass
| in this instance), that 'genius move' sort of needs to be
| flagged as his [Category #1 - Spam], which should be
| moderated. It really is inserting off-topic info into another
| thread.
|
| The saving grace may be that it is both small enough in volume
| and sufficiently interesting to his audience to stay just below
| the threshold.
| slowmovintarget wrote:
| Was he perhaps trying for a Q.E.D. there?
| 2OEH8eoCRo0 wrote:
| https://news.ycombinator.com/newsguidelines.html
|
| > Please don't complain about tangential annoyances--e.g.
| article or website formats, name collisions, or back-button
| breakage. They're too common to be interesting.
| IshKebab wrote:
| It's not really a tangential annoyance. I literally couldn't
| read the post because of the insane format.
| miiiiiike wrote:
| It was a massive stream of tweets, with two long digressions,
| and several embeds. The only thing that would have made it
| worse is if every tweet faded in on scroll.
|
| If we're going to pedantically point out rules, why don't we
| add one that says "No unrolled Twitter threads."?
| Karunamon wrote:
| It is not pedantic, it is people derailing possibly
| interesting discussion of the content with completely off-
| topic noise about the presentation. If you do not like the
| presentation, there are ways to change it.
| drewbeck wrote:
| If we're going to pedantically point out rules, why don't we
| cook hamburgers on the roof of parliament? Or something
| else that isn't pedantically pointing out rules?
| Karawebnetwork wrote:
| Imagine it is a text and you can mark any paragraph. You can
| save that paragraph, like it, or even reply to it. So the
| interaction can grow like tentacles (edit: or rather like a
| tree).
|
| Right now, I could make a comment on either your first or
| second paragraph, or on your entire comment. However, there is
| no way to determine which category my reply falls into until
| you have read it entirely. On a platform like Twitter, where
| there can be up to 100,000 comments on a given piece of
| content, this is very useful.
|
| Better yet, it allows the author himself to dig down into a
| tangent. In theory, someone could create an account and then
| have all of their interactions stay on the same tree without
| ever cutting it off, essentially turning their account into an
| interconnected "wiki" where everyone can add information.
|
| With enough time your brain no longer registers the metadata
| around the tweet. If you ignore it and read it as an entire
| text it is not very different from a regular article or long
| form comment:
| https://threadreaderapp.com/thread/1586955288061452289.html
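|
| A toy sketch (Python, hypothetical names) of that tree idea:
| each paragraph is its own node, so likes and replies can target
| one node instead of the whole text:
|
|     from dataclasses import dataclass, field
|
|     @dataclass
|     class Node:
|         author: str
|         text: str
|         likes: int = 0
|         replies: list = field(default_factory=list)
|
|         def reply(self, author, text):
|             child = Node(author, text)
|             self.replies.append(child)
|             return child  # the conversation branches here
|
|     root = Node("author", "First paragraph, its own node")
|     second = root.reply("author", "Second paragraph, also a node")
|     aside = second.reply("reader", "Aimed only at paragraph two")
|     second.likes += 1  # reactions attach to a single node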
| dingaling wrote:
| I am imagining a normal long-form blog format but with comments
| collapsed after each paragraph, as a compromise between the
| two current options.
| rakoo wrote:
| > Right now, I could make a comment on either your first or
| second paragraph, or on your entire comment. However, there
| is no way to determine which category my reply falls into
| until you have read it entirely
|
| That is _exactly_ what quoting does, and is older than the
| web itself.
| ilyt wrote:
| The less inept sites also allow you to just select text and
| click reply to get that quote and your cursor set below,
| ready for reply
| rcarr wrote:
| This is brilliant, I had never thought about it like this
| before. I'd maybe say grow like a tree rather than tentacles
| although you might have a point in that if you're speaking
| with the wrong person it could be pretty cthulonic.
| lm28469 wrote:
| What you're saying is that we should optimise the way we
| debate things to please the algorithm and maximise user
| engagement instead of maximising quality content and
| encouraging deep reflection
|
| The truth is people can't focus for more than 15 seconds, so
| instead of reading a well-researched and deep article or book
| that might offer sources, nuances, &c., they'll click "like"
| and "retweet" whoever vomited something that remotely goes
| their way, while ignoring the rest
|
| > If you ignore it and read it as an entire text it is not
| very different from a regular article or long form comment
|
| It is extremely different, as each piece is written as an
| independent 10-second thought ready to be consumed and
| retweeted. Reading it on threadreaderapp makes it even more
| obvious: your brain needs to work 300% harder to process the
| semi-incoherent flow. Some blogs written by 15-year-olds are
| more coherent and pleasant to read than this
|
| btw this is what I see on your link, more ads:
| https://i.imgur.com/rhaXStj.png
| Karawebnetwork wrote:
| > What you're saying is that we should optimise the way we
| debate things to please the algorithm and maximise user
| engagement instead of maximising quality content and
| encouraging deep reflexions
|
| Not at all. In my opinion, being able to interact with every
| piece of an exchange allows you to dig down into specific
| points of a debate.
|
| There is a soft stop at the end of every tweet because it's
| a conversation and not a presentation. It's an interactive
| piece of information and not a printed newspaper. You can
| interact during the thread and it might change its outcome.
|
| When you are the person interacting, it's similar to a real-
| life conversation. You can cut someone off and talk about
| something else at any time. The focus of the conversation will
| shift for a short moment and then come back to the main
| topic.
|
| For someone arriving after the fact, you have a time
| machine of the entire conversation.
|
| ---
|
| About the link: it is only the first result on Google. I don't
| use those services, and linking it is not me vouching for this
| specific one. I also use ad blockers at all levels (from
| pi-hole to browser extension to VPN-level blocking), so I
| don't see ads online.
|
| If I go meta for a second, this is the perfect example of
| how breaking ideas into different tweets can be useful.
|
| Were I to share your comment on its own, it would contain that
| information about a link that is not useful to anyone but you
| and me.
|
| Someone reading our comments has to go through this whole
| interaction about the ads and this product. If this were two
| tweets instead, it would have allowed us to comment on each in
| parallel. If it were HN, imagine if you had made two replies
| under my comment and we could have continued under each.
| However, that's the wrong way to do things on this platform.
| ilyt wrote:
| > Right now, I could make a comment on either your first or
| second paragraph, or on your entire comment. However, there
| is no way to determine which category my reply falls into
| until you have read it entirely. On a platform like Twitter,
| where there can be up to 100,000 comments on a given piece of
| content, this is very useful.
|
| Oh, look, I have managed to reply to your second paragraph
| without having to use twatter, how quaint!
| Karawebnetwork wrote:
| There would be a lot of noise if everyone left 5 comments
| under every comment. This is not the way HN is built.
| Commenting too quickly even blocks you from interacting.
| polytely wrote:
| I've never understood this, it's just reading: you start at the
| beginning of a tweet, you read it, then go to the next tweet
| and read it. How is that different from reading paragraphs?
| lm28469 wrote:
| idk man
|
| Maybe
|
| we could put more
|
| Words in a single
|
| Please visit my new website and subscribe to my podcast
|
| Line so it would be
|
| More readable
|
| random user comment: yes you're right mark that would be
| better
|
| and much more user friendly
|
| _insert latest Musk schizophrenic rant_
|
| Ah, and I forgot the "sign up to read the rest" pop-up that
| seemingly appears at random intervals
| mikkergp wrote:
| This is great! It's like modern poetry. I think the
| research suggests fewer words per line make for faster
| reading too.
| adharmad wrote:
| Like re-inventing Bukowski's writing style with terrible UX
| polytely wrote:
| are you reading on the computer? maybe that's the
| disconnect. for me each tweet has around 4 lines of text.
|
| so it reads more like the post you are reading now. where
| each tweet is a decent chunk of text.
|
| making the reading experience only marginally worse than
| reading something on hacker news.
| aniforprez wrote:
| The amount of UI noise around each tweet and how much you
| have to scroll, coupled with the need to trigger new
| loads once Twitter has truncated the number of replies
| and also HOW MUCH YOU HAVE TO SCROLL makes this a
| _terrible_ experience
|
| I understand why people tweet rather than write blogs.
| Twitter gives more visibility and has a far lower barrier
| to entry than sitting down and writing an article or a
| blog. That Twitter hasn't solved this problem after years
| of people making long threads, with this being a big way
| that people consume content on the platform, is a failure
| on their part. Things like ThreadReader should be built
| in and much easier to use. I think they acquired one of
| these thread reader apps too.
| abetusk wrote:
| I think this is important enough to highlight. Tweets are
| very different from other forms of communication on the
| internet. You can see it even here on HN in the comments
| section.
|
| Twitter railroads the discussion into a particular type by
| the form of the discourse. Each tweet, whether it's meant to
| or not, is more akin to a self-contained atomic statement than
| a paragraph relating to a whole. This steers tweets into short
| statements of opinion masquerading as humble, genuine
| statements of fact. Oftentimes each tweet is a simple idea
| that's given more weight because it's presented in tweet
| form. An extreme example is the joke thread of listing out
| each letter of the alphabet [0] [1].
|
| When tweets are responding to another tweet, it comes off as
| one of the two extreme choices of being a shallow affirmation
| or a combative "hot take".
|
| Compare this with the comments section here. Responses are,
| for the most part, respectful. Comments tend to address
| multiple points at once, often interweaving them together.
| When text is quoted, it's not meant as a hot take but a
| refresher on the specific point that they're addressing.
|
| The HN comments section has its problems but, to me, it's
| night and day from Twitter.
|
| I basically avoid responding to almost everything on
| Twitter for this reason. Anything other than a superficial
| "good job" or "wow" is taken as a challenge and usually gets
| a nasty response. I also have to actively ignore many tweets,
| even from people I like and respect, because the format
| overemphasizes trivial observations or opinions.
|
| [0]
| https://twitter.com/dancerghoul/status/1327361236686811143
|
| [1]
| https://twitter.com/ChaikaGaming/status/1270330453053132800
| P_I_Staker wrote:
| You gotta understand their angle...
|
| ... in the early days of the internet ...
|
| ... comments could be very, very long; the user was given a
| virtual unbounded battleground to fight their ideological
| battles ...
|
| ... The public, the rabble, couldn't stop.. The words kept
| coming; a torrent of consonants and vowels descending upon
| our eye ba ... (limit exceeded)
|
| ... lls like an avalanche of ideas ...
|
| ... it was too much and twitter was borne ...
|
| ... the people keep their ideas small, like their tiny
| brains, and non-existent attention spans ...
|
| P.S. I was gonna write this as a comment chain, but
| HackerNews, in all their wisdom, limits self-replies to only
| one
| [deleted]
| rchaud wrote:
| If you want to talk diarrhea, look no further than those "save
| to Readwise / Notion / Pocket" comments that pollute most of
| these long threads.
| rideontime wrote:
| But the twitter thread makes it much easier to pivot into
| talking about his latest startup that's totally unrelated!
| dariusj18 wrote:
| It seems apt that the most engaged comment in the thread is a
| meta comment which derails any conversation about the content
| of the post itself.
| rakoo wrote:
| The post says that moderation is first and foremost a signal-
| to-noise curation. Writing long form content in a Twitter
| thread greatly reduces the signal-to-noise ratio.
| hypertele-Xii wrote:
| The medium is the message.
| dariusj18 wrote:
| But it is also an example of how moderation, or the lack
| thereof, serves a particular end goal. E.g. HackerNews is a
| pretty well-moderated forum; however, sometimes the content
| (PC being related to technology) is within the rules, but
| the behavior it elicits in the other users is detrimental
| to the overall experience.
| bongobingo1 wrote:
| nitter.net has a vaguely more readable view, not good but
| better.
|
| https://nitter.net/yishan/status/1586955288061452289
| dynm wrote:
| Just realized there are extensions that will auto-redirect
| all twitter links to nitter. Why didn't I do this years ago!?
| foobarbecue wrote:
| https://threadreaderapp.com/thread/1586955288061452289.html
|
| Agreed, though, Twitter threads are a really poor
| communications medium.
| madsmith wrote:
| This thread was a delightful read.
|
| Just NOT in twitter. I gave up on twitter and signed out of
| it years ago and refuse to sign back in.
|
| I spent a good hour of my life looking for ways to read this
| thread. I personally know Yishan and value the opinions he
| cares to share, so I knew this would be interesting if I could
| just manage to read it.
|
| Replacing the URL with nitter.net helped, but honestly it was
| most cohesive in threadreaderapp, although that missed some of
| the referenced sidebar discussions (like the appeal to Elon
| to not waste his mental energy on things that aren't real
| atom problems).
| throw7 wrote:
| agreed, but you can go here as a workaround:
|
| https://threadreaderapp.com/thread/1586955288061452289.html
| PM_me_your_math wrote:
| It is painful. I got about 11 posts in before giving up. You
| described it perfectly.
| awb wrote:
| What's funny is he's arguing that moderation should be based on
| behavior, not content, and that you could identify spam even if
| it was written in Lorem Ipsum.
|
| If this thread and its self-referential tweeting were written
| in Lorem Ipsum, it would definitely look like spam to me.
|
| So I guess I disagree with one of the main points. For me, the
| content matters much more than the behavior. Pretty sure that's
| how the Supreme Court interprets 1A rights as well. The
| frequency and intensity of the speech hasn't played a part in
| any 1A case that I can remember; it's exclusively whether the
| content of the speech violates someone's rights, and then
| deciding which outcome leads to bigger problems: allowing the
| speech or not.
| dubeye wrote:
| You do sound a bit like my dad complaining about youngsters
| reading on a Kindle.
|
| I read long twitter threads often, you get used to it
| lm28469 wrote:
| People can also get used to a diet of stale bread and bad
| soup; it doesn't mean I'm striving for a stale-bread-and-
| bad-soup diet
| gort19 wrote:
| I've heard you get used to jail too.
| pjc50 wrote:
| The core is really:
|
| > https://twitter.com/yishan/status/1586956650455265281
|
| "Here`s the answer everyone knows: there IS no principled
| reason for banning spam. We ban spam for purely outcome-based
| reasons:
|
| It affects the quality of experience for users we care about,
| and users having a good time on the platform makes it
| successful."
|
| And also this Chinese Room argument: "once again, I challenge
| you to think about it this way: could you make your content
| moderation decisions even if you didn`t understand the language
| they were being spoken in?""
|
| In other words, there are certain kinds of post which trigger
| escalating pathological behavior - more posts - which destroy
| the usability of the platform for _bystanders_ by flooding it.
| He argues that it doesn't matter what these posts _mean_ or
| whose responsibility the escalation is, just the simple
| physics of "if you don't remove these posts and stop more
| arriving, your forum will die".
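|
| That "could you moderate without reading the language" test can
| be made concrete. A toy behavior-only filter (thresholds
| invented for illustration) never inspects what the words mean,
| only how fast and how repetitively they arrive:
|
|     from collections import defaultdict
|
|     MAX_POSTS_PER_MINUTE = 5
|     MAX_EXACT_REPEATS = 3
|
|     def flag_without_reading(posts):
|         """posts: list of (user, minute, text) tuples."""
|         rate = defaultdict(lambda: defaultdict(int))
|         repeats = defaultdict(lambda: defaultdict(int))
|         flagged = set()
|         for user, minute, text in posts:
|             rate[user][minute] += 1       # posting frequency
|             repeats[user][text] += 1      # exact duplicates only
|             if (rate[user][minute] > MAX_POSTS_PER_MINUTE
|                     or repeats[user][text] > MAX_EXACT_REPEATS):
|                 flagged.add(user)         # no language knowledge
|         return flagged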
| bitshiftfaced wrote:
| I would argue that the outcome-based signal-to-noise reason
| _is_ the principle: it's off-topic. You could also argue
| another principle: you're censoring a bot, not a human.
| danwee wrote:
| I gave up after the 3rd tweet in the thread. I can't understand
| why Twitter threads are a thing.
| nfin wrote:
| my guess is that people can like () individual posts.
|
| The positive of that is:
|
| a) possibility to like () just one post, or 2, 3... depending
| of who good the thread is
|
| b) the fine granular way to like () gives the algorithm way
| better possibilities to whom to show a thread and even
| better, to first show just one intereting post out of that
| thread (also people can mores easily quote or retweet
| individual parts of a thread)
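|
| A toy illustration of (b), entirely hypothetical and not
| Twitter's actual ranking: with per-post likes, a feed can
| surface the single best post from a thread rather than the
| whole thread:
|
|     thread = [
|         {"id": 1, "text": "intro tweet", "likes": 12},
|         {"id": 2, "text": "the quotable insight", "likes": 480},
|         {"id": 3, "text": "self-promo aside", "likes": 7},
|     ]
|
|     best = max(thread, key=lambda post: post["likes"])
|     print(best["id"], best["text"])  # show just this post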
| Akronymus wrote:
| Why are you adding () after "like"? Haven't seen that
| convention before, so I am unaware of the meaning.
| KMnO4 wrote:
| There may have been an emoji or Unicode character between
| the parens that was stripped by HN.
| Akronymus wrote:
| That does make quite a lot of sense.
| darrenf wrote:
| I was assuming there's a trademark symbol (tm) that had
| been stripped by HN. But since I've managed to post one,
| I'm apparently wrong!
| fknorangesite wrote:
| _Or retweet_ individual posts. It makes each one a possible
| pull-quote.
| jjulius wrote:
| Yep; everything's a soundbite.
| fknorangesite wrote:
| At this point, posting this sentiment on HN is more boring and
| irritating than the tweet-thread format could ever be.
| bee_rider wrote:
| It is also specifically called out in the guidelines as not
| helpful.
| sorum wrote:
| He inadvertently invented the Twitter mid-roll ad and we're all
| doomed now because of it
| shkkmo wrote:
| Let's take the core points at the end in reverse order:
|
| > 3: Could you still moderate if you can't read the language?
|
| Except moderators do read the language. I think it is pretty
| self-serving to say that users' views of moderation decisions
| are biased by content but moderators' views are not.
|
| > 2: Freedom of speech was NEVER the issue (c.f. spam)
|
| Spam isn't considered a free speech issue because we generally
| accept that spam moderation is done based on behavior in a
| content-blind way.
|
| This doesn't magically mean that any given moderation team
| isn't impinging on free speech, especially when there are
| misinformation policies in place which are explicitly
| content-based.
|
| > 1: It is a signal-to-noise management issue
|
| Signal-to-noise management is part of why moderation can be good,
| but it doesn't even justify the examples from the twitter thread.
| Moderation is about creating positive experiences on the platform
| and signal-to-noise is just part of that.
| modeless wrote:
| It seems like he's arguing that people claiming moderation is
| censoring them are wrong, because moderation of large platforms
| is dispassionate and focused on limiting behavior no one likes,
| rather than specific topics.
|
| I have no problem believing this is true for the vast majority of
| moderation decisions. But I think the argument fails because it
| only takes a few exceptions or a little bit of bias in this
| process to have a large effect.
|
| On a huge platform it can simultaneously be true that platform
| moderation is _almost_ always focused on behavior instead of
| content, and a subset of people and topics _are_ being censored.
| mr_toad wrote:
| Rules against hate speech will disproportionately affect males.
| Does that mean they're biased against men? If so, is that even
| a bad thing?
| hackerlight wrote:
| > On a huge platform it can simultaneously be true that
| platform moderation is almost always focused on behavior
| instead of content, and a subset of people and topics are being
| censored.
|
| He made this exact point in a previous post. Some topics look
| like they're being censored only because they tend to attract
| such a high concentration of bad actors who simultaneously
| engage in bullying type behavior. They get kicked off for that
| behavior and it looks like topic $X is being censored when it
| mostly isn't.
| modeless wrote:
| That's not the same point. Again, I have no problem believing
| that what you say happens, even often. Even still, some
| topics may _really_ be censored. They may even be the same
| topics; just because there's an angry mob on one side of a
| topic doesn't mean that everyone on that side of the topic is
| wrong, and that's the hardest situation to moderate
| dispassionately. Maybe even impossible. Which is when I can
| imagine platforms getting frustrated and resorting to topic
| censorship.
| rootusrootus wrote:
| Could also be that some objectionable behavior patterns are
| much more common in some ideological groups than others, which
| makes it appear as if the moderation is biased against them. It
| is, just not in the way they think.
| ethotool wrote:
| Nobody has the answers. Social media is an experiment gone
| wrong, just like dating apps and other software that tries to
| replace normal human interaction. These first-generation
| prototypes have a basic level of complexity, and I expect that
| by 2030 technology will have evolved to the point where better
| solutions exist.
| dna_polymerase wrote:
| And when, ever in human history, did something improve without
| intelligent people trying to solve these issues?
| sweetheart wrote:
| I'm amazed at the number of people in this thread who are annoyed
| that someone would insert mention of a carbon capture initiative
| into an unrelated discussion. The author is clearly tired of
| answering the same question, as stated in the first tweet, and is
| desperately trying to get people to think more critically about
| the climate crisis that is currently causing the sixth mass
| extinction event in the history of the planet.
|
| Being annoyed that someone "duped" you into reading about the
| climate crisis is incredibly frustrating to activists because
| it's SO important to be thinking about and working on, and yet
| getting folks to put energy into even considering the climate
| crisis is like pulling teeth.
|
| I wonder if any of the folks complaining about the structure of
| the tweets have stopped to think about why the author feels
| compelled to "trick" us into reading about carbon capture.
| Mezzie wrote:
| To add another perspective (albeit with politics rather than
| climate change):
|
| I worked in political communications for a while. Part of the
| reason it was so toxic to my mental health and I burnt out was
| that it was nearly impossible to avoid politics online even in
| completely unrelated spaces. So I'd work 40 hours trying to
| improve the situation, log in to discuss stupid media/fan shit,
| and have to wade through a bunch of stuff that reminded me how
| little difference I was making, stuff assuming I wasn't
| involved/listening, etc. It was INFURIATING. Yes, I had the
| option to not go online, but I'm a nerd living in a small city.
| There aren't enough people here who share my interests for me
| to go completely offline.
|
| Staying on topic helps people who are already involved in
| important causes to step away and preserve their mental health,
| which in turn makes them more effective.
| rchaud wrote:
| The simple fact of the matter is that too many people are
| either resigned to a Hobbesian future of resource wars, or
| profiting too much from the status quo to go beyond a
| perfunctory level of concern.
|
| $44bn of real-world cash was just spent on Twitter, and HN
| users alone have generated tens of thousands of comments on the
| matter.
|
| How many climate tech related stories will have the same level
| of interest?
| gambler wrote:
| _> No one argues that speech must have value to be allowed (c.f.
| shitposting)._
|
| _> Here's the answer everyone knows: there IS no principled
| reason for banning spam._
|
| The whole thread seems to revolve around this line of
| reasoning, which strawmans what free speech advocates are
| actually arguing for. I've never heard of any of them, no
| matter how principled, fighting for the "right" of spammers
| to spam.
|
| There is an obvious difference between spam moderation and
| content suppression. No recipient of spam wants to receive
| spam. On the other hand, labels like "harmful content" are most
| often used by a third party who doesn't like the conversation
| to stop communication between willing participants. They are
| fundamentally different scenarios, regardless of how much you
| agree or disagree with specific moderation decisions.
|
| By ignoring the fact that communication always has two parties,
| you construct a broken mental model of the whole problem space.
| The model will then lead you astray in analyzing a variety of
| scenarios.
|
| In fact, this is a very old trick of pro-censorship activists.
| Focus on the speaker, ignore the listeners. This way when you
| ban, say, someone with millions of subscribers on YouTube you can
| disingenuously pretend that it's an action affecting only one
| person. You can then draw false equivalency between someone who
| actually has a million subscribers and a spammer who sent a
| message to million email addresses.
| [deleted]