|
| Neural Radiance Fields (NeRF)
| Workaccount2 wrote:
| I know there are a lot of groups working on how to prevent AI
| disruptions to society, or how to mitigate their impact, but are
| there any groups working on how to adapt society to a full blown
| unchained AI world?
|
| Like throw out all the safeguards (which seems inevitable) and
| how does society best operate in a world where no media can be
| trusted as authentic? Or where "authentic" is cut from the same
| cloth as "fake"? Is anyone working on this?
| Agamus wrote:
| One thing we should be doing is supporting critical thinking at
| the high school and university level. Unfortunately, it seems
| we have been dedicated to the opposite of this for about 50
| years, at least in the US.
| Der_Einzige wrote:
| Critical thinking is the most overrated skill ever.
|
| "Critical theorists" are the people who fetishize "critical
| thinking" and all it got them was to embrace cultural
| Marxism.
|
| Constructive thinking is far better than learning how to shit
| on people, as the skill of critique teaches us...
| gknoy wrote:
| At the risk of falling for a joke, I'm not sure "critical
| thinking" means what you think it means. It just means
| thinking objectively about things before making judgments;
| it has nothing to do with criticizing people. The things
| one criticizes are one's own beliefs and one's reasons for
| believing them.
|
| What do I believe? Why do I believe that? Why do I feel
| that evidence supports that belief, but not this one? For
| example, I can explain in a fair bit of detail why I
| believe that the Apollo landing was not faked. I wouldn't
| normally bother to explain those reasons, but all of them
| are based on beliefs and evidence that I've read about, and
| most of those beliefs are subject to reversal should
| counter-evidence surface.
| whimsicalism wrote:
| This is reasoning by word chaining.
| Agamus wrote:
| I think of critical thinking as the art of being critical
| toward oneself when one is thinking.
|
| In other words, when I read something and hear myself
| think, "oh yeah, that sounds right", there is another part
| of my mind that thinks, "maybe not".
|
| Critical thinking is precisely what could have spared us
| from all of that 'cultural marxism' you mentioned, or at
| least let us do it in a way that is... constructive.
| strohwueste wrote:
| What about NFT-based camera recording?
| irjustin wrote:
| This has been discussed many times and it doesn't work.
|
| Simple answer is I can just record a deep fake and get it
| cryptographically signed.
| andruby wrote:
| If you trust a person (or source) and they have a private
| key that they can properly secure, they could always sign
| their material with that key. That would prove that the
| source provided that material.
|
| A blockchain could be a way to store and publish that
| signature & hash.
|
| It can't say "this is real", it can only say "that
| signature belongs to source X".
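|
| As a minimal sketch of that flow (assuming Python with the
| "cryptography" package; the key handling and file name are
| just placeholders):
|
|   import hashlib
|   from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|       Ed25519PrivateKey)
|   from cryptography.exceptions import InvalidSignature
|
|   source_key = Ed25519PrivateKey.generate()  # kept secret by the source
|   public_key = source_key.public_key()       # distributed out of band
|
|   media = open("clip.mp4", "rb").read()      # placeholder file name
|   digest = hashlib.sha256(media).digest()
|   signature = source_key.sign(digest)        # publish digest + signature
|
|   # Verifier: proves "source X signed this", not "this is real".
|   try:
|       public_key.verify(signature, hashlib.sha256(media).digest())
|       print("signed by source X")
|   except InvalidSignature:
|       print("not from source X (or modified)")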
| blamestross wrote:
| > A blockchain could be a way to store and publish that
| signature & hash.
|
| Yes, but it would be a bad one. We have multiple key
| distribution mechanisms that are better for the use case.
| intrasight wrote:
| I disagree. The key alone is neither sufficient nor secure.
| We will need crowdsourced validity data as well. We need
| a zero-trust model - and I too believe that blockchains
| will play a role.
| notfed wrote:
| If a video is already cryptographically signed, then you
| can safely distribute the signature over an untrusted
| channel.
|
| Adding a blockchain into the mix is superfluous, and
| destroys scalability.
| intrasight wrote:
| We don't watch signature keys - we watch videos.
|
| The TV will have to match every short segment - perhaps 5
| seconds of video - against a blockchain which scores the
| validity of that segment - and of course trace back to
| its original source. Signing the whole video is
| necessary but not sufficient.
|
| But yes, this is going to be resource-intensive.
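|
| A toy sketch of the matching idea (all names and the registry
| are hypothetical, and a real system would need perceptual
| fingerprints rather than exact hashes):
|
|   import hashlib
|
|   def segment_scores(video_bytes, registry, seg_size=2**20):
|       # Hash each fixed-size chunk (a stand-in for ~5s of video)
|       # and look it up in a registry of fingerprint -> score.
|       scores = []
|       for off in range(0, len(video_bytes), seg_size):
|           fp = hashlib.sha256(video_bytes[off:off + seg_size]).hexdigest()
|           scores.append(registry.get(fp, 0.0))  # 0.0 = unrecognized
|       return scores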
| blamestross wrote:
| > against a blockchain which scores the validity of that
| segment
|
| Why not just allow a cert with that information to be
| delivered alongside the video? Where would the "score"
| come from?
| blamestross wrote:
| Zero trust models don't exist and the laws of physics
| (probably) don't provide for them. (Materialism is a real
| problem in physics nowadays)
| endtime wrote:
| I think most of those people believe that humans will no longer
| exist in a "full blown unchained AI world".
| cameronh90 wrote:
| I suspect we'll need to return to the idea of getting our news
| from trusted sources, rather than being able to rely on videos
| on social media being trustworthy.
|
| Technically, we could try and build a trusted computing-like
| system to ensure trust from the sensor all the way to a signed
| video file output, but keeping that perfectly secure is likely
| to be virtually impossible except in narrow situations, such as
| CCTV installations. I believe Apple may be attempting to do
| things like this with how Face ID is implemented on iPhone, but
| I suspect we'll always find ways to trick any such device.
| wongarsu wrote:
| 80% of the problem could be solved with a reliable signature
| scheme that allows some remixing of video content. So if CNN
| publishes a video, signed with their key so it's verifiably
| CNN, we need the ability to take a 20 second bit of it and
| still have a valid signature attached that verifies that the
| source is CNN (without trusting the editor). Then you can
| share clips, remix them, etc., and have integration in social
| media that attests the source.
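|
| A hedged sketch of one way such a scheme could work (Python
| with the "cryptography" package; the segment size is a
| stand-in for ~20s of video):
|
|   import hashlib
|   from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|       Ed25519PrivateKey)
|
|   SEG = 2**20  # bytes per segment, a stand-in for ~20s of video
|
|   def sign_segments(video, key):
|       # The publisher signs (index, hash) for every segment, so a
|       # clip made of whole segments still carries valid proofs.
|       sigs = []
|       for i, off in enumerate(range(0, len(video), SEG)):
|           h = hashlib.sha256(video[off:off + SEG]).digest()
|           sigs.append((i, h, key.sign(i.to_bytes(8, "big") + h)))
|       return sigs
|
|   def verify_segment(i, seg, sig, pub):
|       h = hashlib.sha256(seg).digest()
|       pub.verify(sig, i.to_bytes(8, "big") + h)  # raises if invalid
|
| Social media could then check each clipped segment against the
| publisher's key without trusting whoever did the editing.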
| _tom_ wrote:
| Once you remix it, it's no longer reliable. So, you don't
| want it signed if it's modified.
| intrasight wrote:
| My plan to solve this "20 second bit of it" is that it's
| done at the analog hole. Whatever is painting those pixels,
| a smart TV for instance, will be coordinating with cloud
| services to fingerprint at a relatively high temporal
| resolution - maybe 5 seconds. The video itself is the
| signature. But we will need either trusted analog hole
| vendors or some trusted non-profit organization - or likely
| both. I think that "viewing" will be delayed by perhaps 30
| seconds to allow for that signature analysis. These smart
| TVs will overlay a scorecard for all displayed content, and
| owners will be able to set device scorecard thresholds such
| that low-scoring content will be fuzzed out.
| dTal wrote:
| I sincerely hope this dystopian vision of the future is
| satire, but it's already a worrying sign of the times
| that I'm not sure.
| wazoox wrote:
| There are more dangerous AIs than deepfakes. BlackRock's $10
| trillion in investments are driven by an AI, Aladdin. It was
| also sold to some other investors, and controls about $21
| trillion globally. It has basically the power to drive markets
| worldwide. It's a systemic problem nobody talks about...
| astrange wrote:
| Blackrock sells passive index funds. There's no room for an AI
| to make decisions there, so it probably doesn't do anything
| interesting.
| nix0n wrote:
| That's not an AI problem, it's a concentration of wealth
| problem.
|
| Giving that power to a person or group of people would be
| almost as bad.
| kobalsky wrote:
| why do websites feel the need to hijack the browser's scrolling
| logic?
|
| this is very annoying to browse on chrome, but it works well on
| firefox.
| sigspec wrote:
| Agree. It's a jolt to scrolling expectations.
| arky527 wrote:
| Very much agreed. It just results in a terrible UX when 98% of
| other websites have a standard scrolling mechanism.
| [deleted]
| jdthedisciple wrote:
| The bad news: This can and obviously will be abused - be it by
| the secret services or hackers.
|
| The good news, I suppose: As fast and scarily as the tech to fake
| things is evolving, so is presumably the tech that detects fakes.
| _tom_ wrote:
| Technology to detect fakes necessarily lags a bit behind
| technology to create fakes.
|
| In general, you need to have examples of a type of fake to
| detect it.
| RubyRidgeRandy wrote:
| Something I've wondered lately is what life will be like in a
| post-truth society. We already see examples of this now, where
| a large number of people get their news from fake memes on
| Facebook. There are huge swathes of people who live in their
| own make-believe world, like those believing wholeheartedly
| that the 2020 election was stolen.
|
| What will life be like when you can't trust any video or
| interview you see because it could be completely fake? How long
| before someone uses this technology to frame someone for a
| crime? Could the FBI create a deepfake of a cartel leader
| meeting with them and leak it so the cartel thinks he's a
| snitch?
|
| I don't think we'll have the ability to handle this kind of tech
| responsibly.
| hutzlibu wrote:
| "I don't think we'll have the ability to handle this kind of
| tech responsibly."
|
| I do not think so either, but so far we have survived 75+
| years with nukes around.
|
| But you can argue it was mainly by chance. Technological
| progress _is_ awesome, but our societies cannot keep up yet.
| They will have to undergo a heavy transition anyway, or
| perish. Or rather, we are in the process of transition. 20
| years ago most people did not really know what the internet
| was; now most are always online. Data mining, personalised
| algorithms for ad exposure, ..
|
| So deepfakes are a concern, but not my biggest. Rather the
| contrary: when people see how easy it is to fake things, they
| might start developing a healthy scepticism toward
| illuminating YouTube videos.
| munificent wrote:
| I think we'll solve it the same way we solved similar
| transitions when text and image faking became easy: provenance.
|
| For many years now, most have understood that you can't take
| text and images as truth because they can easily be simulated
| or modified. In other words, the media itself is not self-
| verifying. Instead, we rely on knowing where a piece of media
| came from, and we associate truth value with those
| institutions. (Of course, people disagree on which institutions
| to trust, but that's a separate issue.)
| kbenson wrote:
| In other words, the same way we dealt with information before
| photographs and videos were invented. The answer to how to we
| deal with the fact that images and videos can't be trusted is
| to look at what we did before we relied on them. If we're
| smart about it we'll try to pick out the good things that
| worked and try to build in safeguards (as much as possible)
| against the things that didn't, but I won't hold my breath.
| We're already heading back towards some of the more
| problematic behavior, such as popularity or celebrity
| equating to trust.
| pfisherman wrote:
| I think that people will adapt. Humans are very clever and have
| been evolutionarily successful because of the ability to adapt
| to a wide range of environments.
|
| Think about how devastatingly effective print, radio, and
| television propaganda were at the time each medium was widely
| adopted, compared to how effective they are now. They still
| work, but for the most part people have caught on to the game
| and adjusted their cognitive filters.
|
| My guess is that we will see a bifurcation of society into
| those who are able to successfully weed out bullshit from those
| who can't. The people who are able to process information and
| build better internal models of the world will be more
| successful, and eventually people will start imitating what
| they do.
|
| Edit: I do think that these tools, coupled with widespread
| surveillance and persuasion tech (aka ad networks), have set up
| conditions where the few can rule over the many in a way that
| was not possible before. I do think some of the decentralized /
| autonomous organization tech - scaling bottom-up decision
| making to make it more transparent and efficient - is a
| possible counter. Imo, this struggle between technologically
| mediated centralization and top-down enforcement and control vs
| decentralization and bottom-up consensus will be the defining
| ideological struggle of our time.
| azinman2 wrote:
| I think you're underestimating the power of cognitive biases
| and violence from those who only believe the information they
| want to hear.
| pfisherman wrote:
| I think there are quite a few recent historical examples of
| this - WW2, US invasion of Iraq, Russian Invasion of
| Ukraine, etc.
|
| However there is a price to pay for operating on beliefs
| that do not align with reality. It's why almost all
| organizations engage in some form of intelligence
| gathering. Those who are at an information disadvantage get
| weeded out.
|
| Philip K. Dick has a great quote: "Reality is that which,
| when you stop believing in it, doesn't go away."
| adhesive_wombat wrote:
| A skeptic (i.e. someone who cares to verify) not being able to
| trust media because it might be fake is only a minor problem as
| long as you have at least one trusted channel.
|
| The president, say, can just release the statement on that
| channel and it can be verified there (including
| cryptographically, say by signing the file or even using
| HTTPS).
|
| If you lose that channel, then you're pretty much screwed
| because you'll never know which one is the real president. But
| there are physical access controls on some channels, say the
| Emergency Alert System, which can be used to bootstrap a trust
| chain.
|
| What is much more likely is that someone who will not check
| the veracity of the message will take it at face value.
| This is your news-via-Facebook crowd.
|
| At that point, it's less a technical issue than that people
| simply don't want to know the truth. No amount of fact-checking
| and secure tamper-proofing of information chains of custody
| will help that.
| toss1 wrote:
| Agree, and it's even worse than that.
|
| An incredibly small minority of people even understand your
| phrase with any actual fidelity and depth of meaning:
|
| >>it can be verified there (including cryptographically, say
| by signing the file or even using HTTPS)
|
| Even fewer of that microscopic minority have, and understand
| how to use, the tools required to verify the video
| cryptographically, AND even fewer know how to fully validate
| that the tools themselves are valid (e.g., not compromised by
| a bogus cert).
|
| Worse yet, even in the good case where everyone is properly
| skeptical, and 90+% of us figure out that no source is
| trustworthy, the criminals have won.
|
| The goal of disinformation is _not_ to get people to believe
| your lie (although the few useful idiots who do may be a nice
| bonus).
|
| The goal of disinformation is to get people to give up on
| even seeking the truth - to just give up and say "we can't know
| who's right or what's real" -- that is the opening that
| authoritarians need to take over governments and end
| democracies.
|
| So yes, this is next-level-screwed material.
| adhesive_wombat wrote:
| > AND even fewer know how to fully validate that the tools
| themselves are valid (e.g., not compromised by a bogus
| cert).
|
| Kind of, but once you have _a_ single verifiable channel
| back to the source (in this case, some statement by the
| president), it's now possible for anyone to construct a web
| of trust that leads back to that source. For example,
| multiple trustworthy independent outlets reporting on the
| same statement in the same way, providing some way to
| locate the original source. This is why news articles that
| do not link to (on the web) or otherwise unambiguously
| identify a source wind me up. "Scientists say" is a common
| one. It's so hard to find the original source from such
| things.
|
| This falls over in two ways: sources become non-independent
| and/or non-trustworthy as an ensemble. Then you can't use
| them as an information proxy. This is what is often claimed
| about "the mainstream media" _and_ the "non-mainstream
| media" by the adherents of the other. All the fact checks
| in the world are worthless if they are immediately written
| off by those they are aimed at as lies-from-the-system.
|
| The second way is that people simply do not care. It was
| said, it sounds plausible, and they want to believe it.
|
| So I would say that actually the risks here are social, not
| technological. Granted, perhaps a deepfaked video might
| convince more people than a Photoshopped photo. The core
| issue isn't the quality of the fake, it's that a
| significant number of people simply wouldn't care if it
| _were_ fake.
|
| Doesn't mean we're not screwed, just not specifically and
| proximally because of falsification technology, that's
| accelerant but not the fuel.
| toss1 wrote:
| >>that's accelerant but not the fuel.
|
| Yes, indeed! Which is why I'm having so much trouble with
| ppl proposing technological solutions - technically it
| might solve the problem in some situations, but the
| bigger problem is indeed some combination of general
| confusion, highly adversarial information environment
| laden with disinformation, and people's all-too-frequent
| love of confirmation bias and willingly believing BS and
| overlooking warning signs.
|
| I hope we can sort it...
| gadders wrote:
| >> There are huge swathes of people who live in their own make-
| believe world, like those believing wholeheartedly that the
| 2020 election was stolen
|
| There are also those that wholeheartedly believe Trump colluded
| with Russia to win the 2016 election, or that the Steele
| dossier was factual.
| dTal wrote:
| These are not equivalent. Russia _did_ interfere, there
| _were_ links between Trump and Russia, therefore there is
| circumstantial evidence that collusion occurred, sufficient
| to trigger a widely publicized investigation. The allegations
| of election fraud in 2020 however are 100% alternate universe
| yarns spun for political gain with no basis in fact
| whatsoever.
| decafmomma wrote:
| Here's the thing though: in theory, we should already be
| skeptical of video and audio evidence on its own.
|
| Most of our institutions, in theory, do not rely on a single
| medium for assessing the veracity of claims. The strength of
| claims and our ability to split the difference between noise
| and truth comes down to corroboration. How many other sources
| support and sit consistently with a claim? That's, in theory,
| how law enforcement, intelligence, and reporting should work.
|
| In practice, there are massive gaps here, and the time people
| take between attention and decision is shorter than ever.
|
| I don't think it's impossible for us to handle deep fakes, but
| I sense the same fear you have. I think ultimately it is our
| attention spans, and the "urgency" we feel to act quickly,
| that will be more of our downfall than the ability to produce
| fakes more easily.
|
| You don't in fact need a convincing fake to create a powerful
| conspiracy theory. Honestly, you only need an emotional
| provocation, maybe even some green text on an anonymous web
| forum.
| berdon wrote:
| Reminds me of Stephenson's "Fall; or, Dodge in Hell", where all
| digital media is signed by its anonymous author and public
| keys become synonymous with identities. An entire industry of
| media curation exists in the book to handle bucketing media as
| spam, "fake", "true", interesting, etc.
| narrator wrote:
| Surprise Plot Twist: Maybe we're already living in a post-truth
| society and you are still sure you know what the truth is. How
| would you even know that what you were ferociously defending as
| the truth wasn't a lie? What makes you think you're smart
| enough not to fall for lies?
|
| Largely, I think most people's means of finding the truth is
| just to take a vote of the information sources they find
| credible and go with whatever they say. I was talking with some
| friends about the California propositions a while back. Some of
| them were not clear cut which way we should vote on them.
| Instead of discussing the actual issue, people just wanted to
| know what various authority figures thought. These were not
| dumb people I was talking to, and I remember an era, in the
| 90s maybe, where you could actually have a reasoned debate
| and come to the truth that way. It seems that's obsolete these
| days since nobody seems to agree on the basic facts about
| anything.
| cwkoss wrote:
| Disinformation is very common in traditional news media. This
| technology just democratizes the tools and allows anyone to
| engage in it.
|
| There will probably be a net increase in disinformation, but
| citizens will likely also get better at being skeptical of
| currently unquestioned modes of disinformation.
| corrral wrote:
| > There will probably be a net increase in disinformation,
| but citizens will likely also get better at being skeptical
| of currently unquestioned modes of disinformation.
|
| Russia seems to be farther along this path than we are, and
| every account I've read of their experience of disinfo
| suggests not that they got better at seeking the truth, but
| that they just assume everything's a lie and nothing's
| trustworthy, and disappear into apathy.
| wildmanx wrote:
| > I don't think we'll have the ability to handle this kind of
| tech responsibly.
|
| It also makes you wonder whether anybody from the Hacker News
| crowd working on any contributing tech is acting ethically
| responsibly. For myself, I have answered this question with
| "no", which rules out many jobs for me, but at least my kids
| won't eventually look me in the eye and ask "how could you?"
|
| Sure it's cool tech. But so was what eventually brought us
| nuclear warheads.
| brightball wrote:
| Honestly, when people have gone so far as to redefine common
| words it makes it really difficult to have conversations with
| people.
|
| 1. Hate going from one of the most visceral and obsessive
| emotions that exist to being tossed around at everything
|
| 2. The advent of your truth instead of the truth
|
| 3. Constant injections of "x" into every existing word
| apparently?
|
| "Womxn in Mathematics at Berkeley" - https://wim.berkeley.edu/
|
| This is all before we get people to understand that having a
| discussion where some of their points might not be as strong as
| they think they are...somehow means you're attacking them.
|
| The world that we have created for ourselves over the last 20
| years is weird.
| jkaptur wrote:
| Wasn't this bridge crossed when Photoshop became popular?
| bberrry wrote:
| It takes some level of skill to produce a convincing
| Photoshopped image.
| BoorishBears wrote:
| Does that matter when the stakes are as high as these
| arguments always claim?
|
| If we're doomsaying about a "post-truth society", we're
| talking about high-stakes society-scaled skullduggery.
|
| If you're aiming for that level of disruption, easy
| deepfakes vs. hard video/photo editing is not the issue;
| getting people to trust your made-up chain of custody is.
|
| -
|
| This is like when people worry about general AI becoming
| self-aware and enslaving mankind... the "boring version" of
| the danger is already happening: ML models being trained on
| biased data are getting embedded in products that are core
| to our society (policing, credit ratings, etc.), and that's
| really dangerous.
|
| Likewise, people worry about being able to easily make fake
| news, when the real danger is people not being equipped to
| evaluate the trustworthiness of a source... and that's
| already happening.
|
| You don't even need a deepfake: you tweet that so-and-so
| said X, write a fake article saying they said X, amplify it
| all with some bots, and suddenly millions of people believe
| you.
| kache_ wrote:
| The one unwavering thing about technology is that it doesn't
| stop advancing, and we can't use it responsibly.
|
| The good news is that we've been going through rapid, rapid
| tech advancements the past 50 years and we're still here.
| mckirk wrote:
| The thing I don't like about these 'well, people have been
| complaining about this forever' arguments is that it's
| entirely possible to have a) people pointing at an issue for
| a long time and b) that issue still getting progressively,
| objectively worse over time.
|
| There's that example of people pointing out smartphones might
| be bad for children; then someone counters with 'well, thirty
| years ago people complained about children reading too much
| instead of playing outside', with the implication being:
| adults of all ages will find some fault with newer
| generations, and not to worry so much.
|
| But just because it is true that adults will probably always
| worry about 'new, evil things' corrupting the youth, this
| does not mean that the 'new, evil things' aren't getting
| _objectively more dangerous_ over time. Today adults would be
| happy if children still had the attention span and motivation
| necessary to read a book. They'd be happy if they themselves
| still had it, actually.
|
| Graphing the progress of a sinking ship and pointing out that
| the downwards gradient has been stable for a while now and we
| should therefore be okay is generally not a useful
| extrapolation, I would say.
| cupofpython wrote:
| >Graphing the progress of a sinking ship and pointing out
| that the downwards gradient has been stable for a while now
| and we should therefore be okay is generally not a useful
| extrapolation,
|
| I like this analogy. I've had similar thoughts for a while
| too. Granted, I also saw some research that society has been
| objectively _getting better_ in a lot of areas people
| _think_ are getting worse (like violence, specifically
| police abuse) compared to the past. Theoretically this is
| because we have a lot more information now than before, so
| smaller occurrences are generating a larger impression.
|
| That said, I still very much agree with your point and that
| it is very applicable to _specific_ individualized issues.
| Saying that people have been concerned for a while and
| nothing bad has happened yet is accurate for the situation
| where nothing bad will happen, AND the situation where it
| was bad then and is worse now, AND the situation where we
| are approaching a tipping point / threshold where the bad
| will start.
| andruby wrote:
| > The one unwavering thing about technology is that it
| doesn't stop advancing, and we can't use it responsibly.
|
| While I think that is true in general, I am optimistic that
| we've seen at least one technology where we were able to
| constrain ourselves from self-destruction: nuclear weapons.
|
| Of course, nuclear weapons tech is not in reach of
| individuals or corporations, which means there are only a
| handful of players in this game-theory setting.
| sitkack wrote:
| > rapid tech advancements the past 50 years and we're still
| here
|
| This is a tautology. At some point the music stops and you
| aren't here to make the argument that we are still here.
| cupofpython wrote:
| Not entirely tautological. The probability that something
| bad happens tomorrow if we do X today for the first time is
| very different from the probability that something bad
| happens tomorrow if we do X today GIVEN we've been doing X
| every day for 50 years.
|
| It is still insufficient to say nothing bad will happen, of
| course.
| sitkack wrote:
| That's not the argument. The one you are making is the
| same one people make when they conflate weather and
| climate.
| cupofpython wrote:
| Conditional probability applies to many things
| beisner wrote:
| So there are information theoretic ways to certify that media
| is genuine, if you assume trust at least somewhere in the
| process. Basically just cryptographic signing.
|
| For instance, a camera sensor could be designed such that every
| image that is captured on the sensor gets signed by the sensor
| at the hardware level, with a certificate that is embedded by
| the manufacturer. Then any video released could be verified
| against a certificate provided by the manufacturer. Of course,
| you have to trust the manufacturer, but that's an easier pill
| to swallow (and better supported by our legal framework) than
| having to try and authenticate each video you watch
| independently.
|
| There are issues that can arise (what if I put a screen in
| front of a real camera??, what if the CIA compromises the
| supply chain???), but at the end of the day it makes attacks
| much more challenging than just running some deepfake software.
| So there are things that can be done; we're not destined for a
| post-truth world where we can't trust any media we see.
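|
| A minimal sketch of that trust chain (assuming Python's
| "cryptography" package, with Ed25519 keys standing in for the
| real hardware-cert machinery):
|
|   from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|       Ed25519PrivateKey, Ed25519PublicKey)
|   from cryptography.hazmat.primitives.serialization import (
|       Encoding, PublicFormat)
|
|   maker_key = Ed25519PrivateKey.generate()   # manufacturer root key
|   sensor_key = Ed25519PrivateKey.generate()  # burned into one sensor
|   sensor_pub = sensor_key.public_key().public_bytes(
|       Encoding.Raw, PublicFormat.Raw)
|   endorsement = maker_key.sign(sensor_pub)   # "this is a real sensor"
|
|   image = b"...raw sensor readout..."        # placeholder pixels
|   image_sig = sensor_key.sign(image)         # signed at capture time
|
|   # A verifier only needs to trust the manufacturer's public key:
|   maker_key.public_key().verify(endorsement, sensor_pub)
|   Ed25519PublicKey.from_public_bytes(sensor_pub).verify(image_sig, image)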
| notfed wrote:
| > a camera sensor could be designed such that every image
| that is captured on the sensor gets signed by the sensor at
| the hardware level
|
| A hardware-based private key like this will inevitably be
| leaked.
| beisner wrote:
| Each sensor could have a unique cert
| [deleted]
| pahn wrote:
| I'll probably get downvoted into oblivion for mentioning
| blockchain tech, but this might at least help, maybe not in
| its current form but... I did not follow that project, but
| there do exist some concepts in this direction, e.g. this:
| https://techcrunch.com/2021/01/12/numbers-protocols-
| blockcha...
| beisner wrote:
| The idea of having a public record that attests to _when_
| an event happened is interesting, although I'm not sure it
| has to be a blockchain to be useful.
| bogwog wrote:
| That's helpful for the legal system, but it's not going to
| help for attacks designed to cause mass panic/unrest/revolts.
| If another US president wants to attempt a coup, it'll be
| much more successful if they're competent and determined
| enough to produce deepfakes that support their narrative.
|
| The only way to prevent stuff like that is to educate the
| public and teach people how important it is to be skeptical
| of anything they see on the internet. Even then, human
| emotions are a hell of a drug so idk how much it'd help.
| notahacker wrote:
| US Presidents have long had the ability to make false claims
| based on video of something completely different, create
| material using actors and/or compromised communications,
| stage events, or use testimony that information has been
| obtained via secret channels from appointees heading up
| agencies whose job it is to obtain information via secret
| channels.
|
| If anything, recent events suggest the opposite: deepfakes
| can't be _that_ much of a game changer when an election
| candidate doesn't even have to _try_ to manufacture
| evidence to get half the people who voted for him to
| believe his most outlandish claims.
| [deleted]
| Zababa wrote:
| > What will life be like when you can't trust any video or
| interview you see because it could be completely fake?
|
| I don't understand your point. This has been the case for a
| while. People were editing photos to remove people in the time
| of Stalin. And even before that, you could lie, write false
| records, or destroy them.
| tomgraham wrote:
| The good news is that public awareness of potentially
| manipulated media is on the rise. Coupled with good laws and
| good detection tech, public awareness and media literacy are
| important. At Metaphysic.ai, we created the @DeepTomCruise
| account on TikTok to raise awareness.
|
| We also created www.Everyany.one to help regular people claim
| their hyperreal ID and protect their biometric face and voice
| data. We think that the metaverse of the future will feature
| the hyperreal likenesses of regular people - so we all have to
| work hard today to empower people to be in control of their
| identity.
| SantalBlush wrote:
| Creating yet another product to monetize is not a solution,
| it's just more of the problem. It incentivizes a perpetual
| arms race between fabrication and verification at the cost of
| everyday users. No thanks.
| tomgraham wrote:
| It is free. Protecting individuals' rights is more important
| than making money!
| random-human wrote:
| Free, but collecting and storing people's biometric data
| on your servers (per the FAQ). How do I know it's not a
| Clearview AI clone but with easier data gathering? What
| does it say about what the real product is if something
| is free?
| fxtentacle wrote:
| I believe we'll go back to trusting local experts that you can
| meet in person to confirm that they are not a bot.
|
| Because anything online will be known to be untrustworthy. Most
| blogs, chat groups and social media posts will be spam bots.
| And it'll be impossible for the average person to tell the
| difference between chatting with a bot and chatting with a
| human. But humans crave social connections and intimate
| physical contact. So people will get used to the fact that
| whoever you meet online is likely fake and so they'll start
| meeting people in the real world again.
|
| I also predict that some advanced AIs will be classified as
| drugs, because people get so hooked on them that it destroys
| their life. We've already banned abusive loot box gambling
| mechanics in some EU countries, and I think abusive AI systems
| are next. We'll probably also age-limit generative AI models
| like DALL-E, due to their ability to generate naughty and/or
| disturbing images.
|
| But overall, I believe we will just start treating
| everything online as fake, except in the rare case that you
| message a person whom you have previously met in real life (to
| confirm their human-ness).
| newswasboring wrote:
| Your second paragraph is very intriguing. I never really
| thought about this. I wonder if people will actually be able
| to restrict usage though. It's software, and historically it
| has been hard to restrict. Of course, cloud-based systems
| have two advantages: the software is hidden behind the API and
| they have really powerful systems. But the former requires a
| single lapse in security to leak, and the latter just requires
| time till consumer hardware can catch up. If I use past data
| to predict the future (which might be a bad idea in this case),
| it might be almost impossible to restrict AI software.
| formerkrogemp wrote:
| I've heard this for years, but software will eventually
| face its own regulation and barriers to entry much as
| healthcare and accounting have theirs.
| userabchn wrote:
| I suspect that many chat groups (such as Facebook groups),
| even small niche ones, already have GPT-3-like bots posting
| messages that seem to fit into the group but that are trained
| to provide opinions on certain topics that align with the
| message that the organisation/country controlling them wishes
| to push, or to nudge conversations in that direction.
| fxtentacle wrote:
| Aww, that reminds me of the good old IRC days where
| everyone would start their visit with !l to get an XDCC bot
| listing.
| fartcannon wrote:
| I want to agree with you, deeply, but the number of people
| who fall for simple PR/advertising in today's world suggests
| otherwise.
|
| I think we'd have a chance if they taught PR tricks in
| schools starting at a young age. Or at minimum, if websites
| that aggregate news would identify sources that financially
| benefit from you believing what they're saying.
| corrral wrote:
| I've long thought that high school should require _at
| least_ one course that I like to call "defense against the
| dark arts" (kids still dig Harry Potter, right? Hahaha).
|
| The curriculum would mostly be reasoning, how to spot
| people lying with graphs and statistics, some rhetoric, and
| extensive coverage of Cialdini's _Influence_. The entire
| focus would be studying, and then learning to spot and
| resist, tricks, liars, and scam artists.
| jayd16 wrote:
| When you say "everything online" do you mean every untrusted
| source? Surely the genie is out of the bottle on
| communication over the web. That local source will have a
| website. Because of that I feel like we'll always just have
| to be vigilant, just like we always should have been. After
| all, local scams still exist. Real humans are behind the
| bots.
| fxtentacle wrote:
| > Real humans are behind the bots.
|
| Yes, but those humans are usually anonymous and on the
| other side of the planet which makes them feel safe. And
| that allows them to be evil without repercussions.
|
| Back in the day, I went to LAN parties. If someone spotted
| a cheater, they would gang up with their friends and
| literally throw the offender out of the building. That was
| a pretty reliable deterrent. But now, with all games being
| played online, cheating is rampant.
|
| Similarly, imagine if those Indian call centers that scam
| old ladies out of their life savings were located just a
| quick drive away from their victims' families. I'm pretty
| sure they would have enough painful family visits such that
| nobody would want to work there.
|
| Accordingly, I'm pretty sure the local expert would have
| strong incentives to behave better than an anonymous online
| expert would.
| jayd16 wrote:
| To argue that scams didn't exist or weren't a problem
| before the internet is pretty indefensible, no matter the
| anecdotes.
| fxtentacle wrote:
| I was merely trying to argue that scams within a local
| community would be less severe than scams between
| strangers, because they are easier to punish and/or
| deter.
| wongarsu wrote:
| I'm not sure the experts have to be local. I can't be sure
| that a random twitter account isn't a bot, but I can be
| pretty sure that tweets from @nasa are reasonably
| trustworthy. People will form webs-of-trust: they trust one
| source, the people viewed as trustworthy by them, etc. Anyone
| outside of that will be untrustworthy.
|
| That's not too dissimilar from what we do today, after all
| people have always been able to lie. The problem is just that
| if you start trusting one wrong person this quickly sucks you
| into a world of misinformation.
|
| I find your point about regulating AI interesting. We already
| see some of this, with good recommendation systems being
| harmful to vulnerable people (and to a lesser degree most of
| us). This will probably explode once we get chatbots that can
| provide a strong personal connection, replacing real human
| relationships for people.
| freetinker wrote:
| HN is a good example (or precursor) of webs-of-trust. Nice
| phrase.
| mrshadowgoose wrote:
| The concerns in your second paragraph can be mostly mitigated
| using a combination of trusted timestamping, PKI,
| cryptographically chained logs, and trusted hardware. Recordings
| from normal hardware will increasingly approach complete
| untrustworthiness as time goes on.
|
| The concerns raised in the first paragraph however... the next
| few decades are going to be a wild ride. Hopefully humanity
| eventually reaches an AI-supported utopian state where people
| can wrap themselves in their own realities without it
| meaningfully affecting anyone else. Perception of reality is
| already highly subjective; most of the fundamental issues are
| due to resource scarcity/inequality. Most other issues
| evaporate once that's solved.
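|
| As a sketch of just the "cryptographically chained log"
| ingredient (Python standard library only; timestamping and
| PKI would be layered on top of this):
|
|   import hashlib, json, time
|
|   def append(log, event):
|       # Each entry commits to the previous entry's hash, so
|       # rewriting history breaks every later link in the chain.
|       prev = log[-1]["hash"] if log else "0" * 64
|       entry = {"time": time.time(), "event": event, "prev": prev}
|       entry["hash"] = hashlib.sha256(
|           json.dumps(entry, sort_keys=True).encode()).hexdigest()
|       log.append(entry)
|
|   def verify(log):
|       prev = "0" * 64
|       for e in log:
|           body = {k: e[k] for k in ("time", "event", "prev")}
|           good = hashlib.sha256(
|               json.dumps(body, sort_keys=True).encode()).hexdigest()
|           if e["prev"] != prev or e["hash"] != good:
|               return False
|           prev = e["hash"]
|       return True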
| FastMonkey wrote:
| I think you can technically mitigate some concerns for the
| people who understand that, but practically it's going to be
| a very different story. People will believe who/what they
| believe, and an expert opinion on trustworthiness is unlikely
| to change that.
|
| I think being in the real world and meeting real people is
| the only way to create a real, functional society. Allowing
| people to drift away into their own AI supported worlds would
| eventually make cooperation very difficult. I think it would
| just accelerate the tendency we've seen with social media,
| creating ever more extreme positions and ideologies.
| thedorkknight wrote:
| I don't think it'll be much different from how it has been
| for most of human history. We really only had a brief blip of
| having video that was generally trustable, but keep in mind
| that before that, for thousands of years, it was just as hard
| to know the truth.
|
| Someone told you stuff about the outside world, and you either
| had the skepticism to take it with a grain of salt, or you
| didn't.
| gernb wrote:
| I was listening to "This American Life" and they had a segment
| on someone who set up a site to give you a random number to
| call in Russia, where you were supposed to give them info about
| what's happening in Ukraine. It was somewhat shocking to hear
| their side of the story: that Russia is a hero for helping
| oppressed Russians in Ukraine.
|
| But then I stepped back and wondered: I'm assuming that the
| story I've been told is also 100% correct. What proof do I have
| that it is? I get my news from sources that have been wrong
| before or that have a record of reporting only the official
| line. My gut still tells me the story I'm being told in the
| West is correct, but still, the bigger picture is: how do I
| know whom to trust?
|
| I see this all over the news. I think/assume the news I get
| about Ukraine in the West is correct, but then I see so much
| spinning on every other topic that it's hard to know how much
| spinning is going on here too.
| RobertRoberts wrote:
| I was asked "What are we going to do about Ukraine!?" And I
| said, "It's a civil war that's been going on for almost 10
| years, what is different now?" and their response was "what?
| I'd never heard that." And I added, "In 2014 there was an
| overthrow of an elected president there and it started the
| war." Blank stares.
|
| I have a friend who traveled to Europe regularly for tech
| training, including Ukraine, and he was surprised at how
| little people know about what is going on, because people's
| news sources are so limited. (Mostly by choice, I assume.)
|
| No special tech needed to manipulate people, just a lack of
| multiple information sources?
| mcphage wrote:
| > what is different now?
|
| Um. Is this a serious question?
| RobertRoberts wrote:
| Yes. I didn't know there was an invasion when I was asked
| about Ukraine, but I knew about the past history (some of
| it, at least).
| HyperSane wrote:
| The president who lost power in 2014, Viktor Yanukovych,
| was a Russian puppet who refused to sign the European
| Union-Ukraine Association Agreement in favor of closer ties
| to Russia. The Ukrainian parliament voted to remove him from
| office by 328 to 0. He then fled to Russia.
| synu wrote:
| It's hard to fathom believing there's nothing new or
| relevant happening with the 2022 invasion, or why, if
| there was a lead-in to the conflict, that would be a
| reason on its own to conclude that there's nothing to be
| done now.
| RobertRoberts wrote:
| See this is the problem. While I follow plenty of
| international news, I didn't know this.
|
| There is often just too much to know to fully understand a
| situation. So how can anyone form a valid opinion?
|
| As a follow up, was the election of Viktor Yanukovych
| lawful? If not, then why not point out he was a puppet
| from a manipulated election? That would be worth a coup,
| but not because you disagree with his politics; that's
| just insanity. Look at what Trump believed and supported; we
| didn't start a civil war because Trump wouldn't sign a
| treaty, and he was accused of being a Russian puppet too.
| There is just more to this story than you are letting on.
| idontwantthis wrote:
| This doesn't bother me that much because evidence isn't
| required to convince millions of people that a lie is true. We
| already know this. Why make fake evidence that could be
| debunked when you can just have no evidence instead?
| [deleted]
| fleetwoodsnack wrote:
| Different instruments can be used to capture different
| segments of the population. You're right there are gullible
| people who are more likely to believe things with limited or
| no evidence. But it isn't necessarily about the most
| impressionable people, nor is it about instilling sincerely
| held beliefs.
|
| Instead, what may be a cause for concern is simply the
| instilling of doubt in an otherwise reasonable person
| because of the perceived validity of contrived evidence. Not
| so much that it becomes a sincerely held belief, but just
| enough that it paralyzes action and encourages indifference
| due to ambiguity.
| astrange wrote:
| This discussion isn't useful because you're assuming people
| actually care if something is true before they "believe"
| it, which they don't, so they don't need evidence.
| "Believing" doesn't even mean people actually hold beliefs.
| It means they're willing to agree with something in public,
| and that's just tribal affiliation.
| [deleted]
| fleetwoodsnack wrote:
| >you're assuming people actually care if something is
| true before they "believe" it, which they don't
|
| This seems like an assumption too. I know there are
| instances like you've described but they're not absolute
| nor universal and I accounted for that in my original
| comment.
| idontwantthis wrote:
| Think about how few people believe in Bigfoot when video,
| photographs, footprints, eye witness testimony all exist.
|
| Think about how many people believe in Jesus without any of
| that physical evidence.
|
| If anything, the physical evidence turns most people off.
| And I'd argue that most Bigfooters don't even believe in
| the physical evidence, but use it as a tool to hopelessly
| attempt to convince other people to believe in what they
| already believe is true.
| mgkimsal wrote:
| > You're right there are gullible people who are more
| likely to believe things with limited or no evidence
|
| Often the lack of evidence _is the proof_ of whatever is
| being peddled. "No evidence for $foo? OF COURSE NOT!
| Because 'they' scrubbed it so you wouldn't be any wiser!
| But _I_ have the 'truth' here... just sign up for my
| newsletter..."
| mc32 wrote:
| I think a bigger question is whether reputable sources --those
| people trust for whatever reason-- would use this technology to
| prop up ideas and/or to create narratives.
|
| I don't think it's far-fetched. We've already seen where videos
| are misattributed[1] to stoke fear or to promote narratives
| --by widely trusted news sources.
|
| [1] This was foreshadowed with "Wag the Dog" but happens often
| enough in the media today that I don't think use of "deepfake"
| technology is beyond the pale for any of them.
| ketralnis wrote:
| It almost doesn't matter now that people have fractured on
| which sources they consider reputable. Trump called a CNN
| reporter "fake news" and presumably his followers think of
| them the same way I think of Fox. I absolutely think that Fox
| would use technology to lie, and I'm sure Fox fans think that
| "the liberal media" would. So people are going to think that
| reporting is fake whether or not it is
| mc32 wrote:
| Wasn't 'The Ghost of Kiev' almost entirely fake but the
| news carried it as real?
| ketralnis wrote:
| I don't know. How would you "prove" it? Google it, and
| look for what other people who agree with you think?
| mc32 wrote:
| Well...
| https://www.washingtonpost.com/world/2022/05/01/ghost-of-
| kyi... admitted by Ukrainians themselves...
| ketralnis wrote:
| According to an article you found online. That's exactly
| my point, if we can't trust news sources then we can't
| really know anything. Because of my aforementioned
| distrust of Fox News, if it were written by them I'd
| dismiss that article out of hand placing no truth value
| on it either way. I'd expect somebody that distrusts CNN
| to do the same if it were written by them.
|
| "It's confirmed!", "They admitted it!", and other
| unprovable internet turns of phrase in comments sections
| are really just "I believe it because somebody I trust
| said so" and that only has weight as long as trust has
| weight.
| mc32 wrote:
| If the accused admit to something, it's more believable
| than the alternative (that they were forced into a false
| admission).
|
| So in this case, if the Ukrainian government admits to
| making things up, then I would think it's believable that
| they made something up for the sake of propaganda. We can
| also check more independent sources --read Japanese news,
| or Indian news sources, etc.
| ketralnis wrote:
| We don't know that the accused admitted to anything. We
| know that the Washington Post says that they did. The
| world becomes very solipsistic when you lose trust in
| reporting.
| GuB-42 wrote:
| There has never been a truth society.
|
| This tech will certainly be used to frame someone for a crime,
| just as I am sure Photoshop has been, along with thousands
| of other techniques. And modern technology offers counters. It
| is an arms race but because of the sheer amount of data that is
| collected, I think that truth is more accessible than ever. The
| more data you have, the harder it is to fake and keep
| consistent.
| fxtentacle wrote:
| "Cheerleader's mom created deepfake videos to allegedly
| harass her daughter's rivals"
|
| "The girl and her coaches were sent photos that appeared to
| depict her naked, drinking and smoking a vape pen, according
| to police"
|
| https://abcnews.go.com/US/cheerleaders-mom-created-
| deepfake-...
| jl6 wrote:
| I don't know, it seems like the existence of widespread, easy
| photo/video/audio faking technology could be a really strong
| argument for dismissing any purported photo/video/audio
| evidence.
|
| Wouldn't it be funny if deepfakes destroyed the blackmail
| industry?
| kleer001 wrote:
| Thankfully, things that are real are very cheap to follow up
| on. Questioning some security footage? No worries, there's 100+
| hours of it to cross-check with the footage, and three other
| cameras too.
|
| IMHO, consilience and parsimony will save us.
| VanillaCafe wrote:
| The real problem isn't the veracity of the information, but the
| consensus protocol we use to agree on what's true. Before the
| internet, we were more likely to debate with our neighbors to
| come to an understanding. Now, with the large bubbles we can
| find ourselves in, afforded by the internet and social media,
| we can find a community to agree on anything, true or not. It's
| that lack of challenge that allows false information to
| flourish, and it is the real problem we need to solve.
| whimsicalism wrote:
| I would be curious if false information is actually more
| common now. It seems like people regularly believed all sorts
| of false things not too long ago.
| redox99 wrote:
| People with some degree of knowledge already know that any
| photo could be photoshopped. People who don't care will
| blindly trust a picture of someone with a quote or caption
| saying whatever, as long as it fits their narrative.
|
| This has been the case for photos for almost 2 decades. The
| fact that you can now do it with video or audio doesn't change
| that much IMO.
| hutrdvnj wrote:
| I think it does, because while you obviously couldn't trust
| images for the last two decades or so, you could resort to
| video, which wasn't easy to believably deepfake until recently.
| But if everything online could be a deepfake, how can you find
| out the truth?
| whimsicalism wrote:
| Videos can be faked too, it is just cheaper now.
| makapuf wrote:
| It's called special FX, and it's more than a century old
| (people are now aware it's fake, but the word is that the
| train arriving at La Ciotat made people run out of the
| movie theatre).
| micromacrofoot wrote:
| Photos have been altered for much longer than 2 decades.
| Think of airbrushing models in magazines (used to be literal
| airbrushes painting over photos). This has had a serious
| impact on our perception of beauty and reality.
| WanderPanda wrote:
| I think society will adapt within 1 generation. The tech is
| already there (signing messages with asymmetric encryption)
| toss1 wrote:
| And how many will use the tech, how many of those will use
| it competently, and how many of those are competent to
| validate and know that their checking technology has not
| been compromised (e.g., by hacking or distributing bad
| authenticity checkers and/or certs, as happens with hacked
| or counterfeit crypto-wallets)?
| bee_rider wrote:
| End-to-end encryption was a giant pain in the butt that
| required dinking around with PGP or whatever, but now it is
| a pretty mainstream feature for chat apps (once they
| figured out how to monetize despite it). Tech takes a while
| to trickle down to mainstream applications, but it'll get
| there if the problem becomes well known enough.
| toss1 wrote:
| I agree that e2e encryption is becoming more widespread
| and "user friendly".
|
| However, the friendliness seems inversely proportional to
| the ability of the users to detect that their tool is
| failing/corrupted/hacked/etc. So while we might have
| more widespread tools, we also have a more widespread
| ability to give a _false_ sense of security.
| nathias wrote:
| you could never trust it, now you'll know you can't trust it
| EGreg wrote:
| Is this what Matterport 2 app uses?
|
| When you take many photos of a scene or indoor place and it's
| stitched together?
|
| Can this be used for metaverses?
|
| ALSO why not synchronize videos of the same event and make an
| animated 3D movie from 2D ones!!! Great as a "disney ride
| metaverse"
|
| Who is doing that in our space?
| getcrunk wrote:
| I just started playing cyberpunk 2077. Spoilers:
|
| The idea of the "black wall" to keep out bad AI comes to mind.
| Not arguing for it, but just acknowledging that maybe one day
| we'll all have to live in walled gardens to stay safe from
| rogue AIs, or rather rogue actors using powerful AIs.
| henriquecm8 wrote:
| One thing I've always wondered about that: they explained where
| those are running, but how do they still have autonomy? Are
| they producing their own power? How about when they need to
| replace hardware?
|
| I am not saying it's impossible, but I would like to see that
| part being explored, even if it's in the tie-ins comics.
| echelon wrote:
| What's the difference in NeRF from a classical photogrammetry
| pointcloud workflow? It seems like the representation and outputs
| are identical.
|
| Why would you prefer NeRF to photogrammetry? Or vice versa?
| randyrand wrote:
| NeRFs can represent reflections, speculars, and refractions.
|
| They are also proving to be faster, more accurate, etc.
|
| The input data is the same. NeRFs have the chance of requiring
| less input data.
| flor1s wrote:
| Neural Radiance Fields are a technique from the neural
| rendering research field, while photogrammetry is a research
| field on its own. However these are just turf wars and in
| practice there is a lot of overlap between both fields.
|
| For example, most NeRF implementations recommend the use of
| COLMAP (traditionally a photogrammetry tool) to obtain camera
| positions/rotations that are used alongside their images. So
| this multi-view stereo step is shared between both NeRF (except
| a few research works that also optimize for camera
| positions/rotations through a neural network) and
| photogrammetry.
|
| After the multi-view stereo step in NeRF you train a neural
| renderer, while in photogrammetry you would run a multi-view
| geometry step/package that uses more traditional optimization
| algorithms.
|
| The expected output of both techniques is slightly different.
| NeRF produces renderings and can optionally export a mesh
| (using the marching cubes algorithm). Photogrammetry produces
| meshes and in the process might render the scene for editing
| purposes.
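|
| For the neural-renderer step, the core operation is volume
| rendering along camera rays. A minimal numpy sketch, with
| "field" standing in for the trained NeRF MLP:
|
|   import numpy as np
|
|   def field(points):
|       # Stand-in for the trained MLP: density + color per point.
|       sigma = np.exp(-np.linalg.norm(points, axis=-1))
|       rgb = np.sin(points) * 0.5 + 0.5
|       return sigma, rgb
|
|   def render_ray(origin, direction, near=0.1, far=4.0, n=64):
|       t = np.linspace(near, far, n)
|       pts = origin + t[:, None] * direction      # samples on the ray
|       sigma, rgb = field(pts)
|       delta = np.append(t[1:] - t[:-1], 1e10)    # sample spacing
|       alpha = 1.0 - np.exp(-sigma * delta)       # opacity per sample
|       trans = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
|       weights = trans * alpha                    # per-sample contribution
|       return (weights[:, None] * rgb).sum(axis=0)  # final pixel color
|
|   pixel = render_ray(np.zeros(3), np.array([0.0, 0.0, 1.0]))
|
| Training adjusts the MLP so that rendered pixels match the
| input photos at the COLMAP-recovered camera poses.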
| natly wrote:
| I was initially annoyed by this title but now I'm gonna switch my
| perspective to being happy that ideas like this are floating
| around since it acts as a really cheap signal to tell if someone
| knows what they're talking about or not when it comes to ML.
| nathias wrote:
| can't wait until deepfakes completely revolutionize people's
| relation to information
___________________________________________________________________
(page generated 2022-06-27 23:01 UTC) |