|
| dcow wrote:
| I think a lot of people try to do identity things without
| understanding the fundamental nature of the problem they're
| attacking.
|
| For instance: I'm really really worried that governments are
| going to default into an understanding of digital identity that
| involves ownership of an email address and mobile phone rather
| than the ability to sign a document.
|
| Or: I'm really annoyed that software services and web apps have
| clauses like "you can't use scripts or automation software to
| access our API" when a browser is just _that_. And they should
| really be enforcing rate limits and punishing abusive behavior
| whether the user clicked a button in a browser or a script did.
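|
| As a rough sketch of the alternative (made-up numbers and names, not
| anything a real service ships): key the limit to the authenticated
| identity, so a button click and a script call look identical to the
| server.
|
|     // Token-bucket rate limit keyed by identity, not by client type.
|     type Bucket = { tokens: number; last: number };
|
|     const buckets = new Map<string, Bucket>();
|     const CAPACITY = 60;        // burst size (placeholder value)
|     const REFILL_PER_SEC = 1;   // steady-state requests/second (placeholder)
|
|     function allowRequest(identity: string, now = Date.now()): boolean {
|       const b = buckets.get(identity) ?? { tokens: CAPACITY, last: now };
|       // refill in proportion to elapsed time, capped at capacity
|       b.tokens = Math.min(CAPACITY, b.tokens + ((now - b.last) / 1000) * REFILL_PER_SEC);
|       b.last = now;
|       const allowed = b.tokens >= 1;
|       if (allowed) b.tokens -= 1;   // throttled the same way for humans and scripts
|       buckets.set(identity, b);
|       return allowed;
|     }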
|
| These types of things are not rooted in a fundamental
| understanding of identity; they're sloppy stop-gaps. Despite all
| its faults, this is one of the reasons I'm super excited about
| WebAuthn. At least it normalizes the idea that an identity is a
| cryptographic secret and not "possession of an email and
| phone". We really really need to dig out of this "email address
| identifies you" hole.
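|
| A minimal sketch of what that looks like in the browser (the
| relying-party values below are placeholders, nothing from the paper):
| the credential the site gets back is a public key, and user.id is an
| opaque byte handle, not an email address.
|
|     // Registering a WebAuthn credential: the "identity" is a key pair
|     // held by the authenticator. All names/values here are placeholders.
|     async function register(challenge: Uint8Array) {
|       const cred = await navigator.credentials.create({
|         publicKey: {
|           challenge,                                   // random bytes from the server
|           rp: { id: "example.com", name: "Example" },  // placeholder relying party
|           user: {
|             id: crypto.getRandomValues(new Uint8Array(16)), // opaque handle, not an email
|             name: "any-label",        // display only; carries no verification weight
|             displayName: "any-label",
|           },
|           pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
|         },
|       });
|       return cred as PublicKeyCredential | null;
|     }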
|
| Anyway, it's exciting to see people discuss the topic more
| formally. It gives me hope that we can ultimately get to a better
| understanding of digital identity, instead of trying to solve
| impossible problems by chasing an impossibly perfect solution
| that verifies all three tenets (which doesn't actually exist) and
| making a big mess of things because nobody stopped to ask or
| understand the scope of what we _should_ be trying to do.
|
| Ultimately identity should be empowering not oppressive. And
| right now it feels more like services coerce people into all
| sorts of weird requirements (having an email, getting a phone
| verification code, running software on a device with an
| integrity-attestation framework, etc.) rather than trusting them
| and punishing bad behavior.
|
| I want my government (and web services, but especially my
| government) to trust me and punish bad behavior, not treat me
| like an untrusted bot that needs to be managed and continuously
| verified.
| OfSanguineFire wrote:
| > an understanding of digital identity that involves ownership
| of an email address and mobile phone rather than the ability to
| sign a document.
|
| Buying a SIM card in a great many countries today already
| requires showing state-approved ID and then signing a form that
| the shop clerk prints out. So, ownership of a mobile phone
| number does mean being able to sign a document. Are you
| concerned about SIM-jacking? I admit, I find the thrust of your
| post difficult to follow.
| Brian_K_White wrote:
| It absolutely does not, since a SIM is just an object that
| can be acquired in any number of ways other than the intended
| one. It can also be faked and not possessed at all, since all
| the server sees is some data. The server (the phone company's
| hardware) is not a notary public watching you sign something
| after verifying that your ID matches your person.
|
| This is exactly the kind of grossly naive assumption the
| parent is talking about: people treat it as though it had some
| substance when it has practically none.
| version_five wrote:
| >Ultimately identity should be empowering not oppressive.
|
| I agree completely. Are there any examples of that though,
| where it is empowering?
|
| Talking about government (or any bureaucracy), there is no
| chance of empowerment, at least from the administrative arm.
| Software is written by lowest bidders for the convenience of
| administrators, to help them treat people like cattle. This can
| only change when we complain to real politicians who could
| potentially advocate for empowerment. As long as bureaucrats
| are in charge, it only gets worse.
| dontupvoteme wrote:
| To these people, anonymity is the problem. Being easily doxxed
| and public is considered a _good_ thing by them.
| Morizero wrote:
| Title should read "sentience, location, and uniqueness", which
| the paper states are the three key properties of identity.
| someguy7250 wrote:
| IMO, the meat of this paper is in sections 4.3 and 4.4.
|
| And I can't say for sure, but the formal proof in 4.4 basically
| summarizes the same points laid out in 4.3.
|
| Most of these are not inherently mathematical problems but
| social ones.
|
| > Verifying sentience is a fuzzy concept. While they can be bound
| together momentarily as we see in [66], the binding is very
| easily decoupled. The verified user might choose to sell off their
| uniqueness identifier at time period t + 1 if the verification
| which binds sentience with uniqueness ends at t.
|
| Basically, people can sell identities
|
| ----
|
| What really concerns me, though, is how much and how often this
| paper discusses DRM, or in their own words, a "trust anchor":
|
| > With the assumed threat model in our case, the lack of inherent
| trust in the user only compounds the unreliability of the model
| without any trust anchor.
|
| > Assuming a proof of location is for a mobile device, rather
| than a particular human being, then associating the proof of
| uniqueness obtained under such a condition, i.e., without the
| involvement of a trust anchor, is unreliable.
|
| I know that the authors aren't directly calling for more
| centralized trust. But given recent developments at Google, we
| all know how readers will take it.
| MichaelZuo wrote:
| > > Verifying sentience is a fuzzy concept. While they can be
| bound together momentarily as we see in [66], the binding is
| very easily decoupled. The verified user might choose to sell
| off their uniqueness identifier at time period t + 1 if the
| verification which binds sentience with uniqueness ends at t.
|
| > Basically, people can sell identities
|
| I can see why Sam Altman believes iris scans are the future;
| it's definitely much more cumbersome to 'sell off' your iris,
| especially if it needs to be rescanned on a daily basis or
| sooner.
| BSEdlMMldESB wrote:
| We cannot live in a society where I must demonstrate to another
| human that I'm human with a piece of paper just because the other
| human is a bureaucrat with a computer.
|
| This is an 'online-only' problem.
| RcouF1uZ4gsC wrote:
| Nit Pet Peeve: Confusing sapience and sentience.
| Scaevolus wrote:
| On the internet, the problem is rarely dogs getting up to
| mischief.
| rocketbop wrote:
| That's a nice line but could you tell me what it means?
| willturman wrote:
| I took it as a tongue-in-cheek reference to this:
|
| https://en.wikipedia.org/wiki/On_the_Internet,_nobody_knows_...
| ketzo wrote:
| We consider many animals to be "sentient": we recognize
| that they are to some degree conscious, in this case
| meaning that they have the capability to sense, perceive
| their world, and have some kind of emotion about the things
| they perceive.
|
| "Sapience", on the other hand, is essentially "human-level
| intelligence, consciousness, and self-awareness" (from
| _Homo sapiens_).
| colechristensen wrote:
| I'm starting to doubt the Internet nitpickers' distinction and
| definition of the two.
| DougMerritt wrote:
| Their distinction follows dictionaries that I've looked at,
| but common usage is clearly diverging from that.
|
| It's unclear to me at what point we should stop saying that
| lots of people are using words incorrectly, and start saying
| that lots of dictionaries are sticking with outmoded
| definitions, but looking at the past, transitions certainly
| occur sometimes.
| greiskul wrote:
| There are lots of people who have this view that there is an
| "objectively correct" language, and that you can find it in
| grammar books and dictionaries. Any linguist worth their
| salt knows that is a completely outdated and classist way
| of studying language. Dictionaries are supposed to be
| updated according to how people use language, not people
| "corrected" to follow the dictionary. While language
| learners can speak a language "wrong", any group of native
| speakers has as much ownership of their language as any
| other group, and as long as they can understand each other,
| they are never wrong.
| version_five wrote:
| > Trolls, bots, and sybils distort online discourse and
| compromise the security of networked platforms.
|
| In some sense I think the authors' hypothesis is a good thing,
| i.e. that you can never fully verify someone online. It prevents
| wholesale algorithmic management of people, which is really what
| governments and companies would like to do, and forces some level
| of human contact, or at least intervention. I expect it's
| inevitable that they'll find a way to offload the problem onto
| the citizen (for the most part they already have), but I'm
| personally glad it's impossible to assign me some kind of
| infallible identifier that would let me be abused, _The Castle_
| style, remotely and without recourse.
| dontupvoteme wrote:
| Half the internet thinks "troll" means "person who disagrees
| with me or (dis)likes a thing I (dis)like", so I'm distrustful
| of people who paint trolls as a big problem on the internet.
|
| Is this how the Western Social Credit system begins?
| dfhanionio wrote:
| I think we've lost this battle. "Troll" means roughly "person
| who behaves badly". The word has become useless.
|
| I know language changes over time. This was clearly a change
| for the worse.
| pphysch wrote:
| Trolls, as in troll farms, astroturfing, organized influence
| campaigns, etc. are absolutely a serious problem for any
| society that pretends to care about democracy.
|
| Especially in the LLM era, where the marginal cost of adding
| another artificial "voice" approaches $0.
| dontupvoteme wrote:
| That's not what "troll" means. "Astroturfing", as you said, is
| a much better term.
|
| They're going to be pushing for WEI/attestation/requiring
| easily doxxable accounts, using this as a bogeyman, aren't they?