Terrifying would be continuing to take everything we see as fact in 5 to 10 years. I think we're in a special window where, if you see a video of pretty much anybody, you can usually tell right away whether it's fake, and even when it's hard, it can usually be debunked. Before now, you really couldn't fake video at all without it being obvious.
In 5 to 10 years, hopefully we will have learned to never ever ever take anything we see as fact, because we absolutely will not be able to distinguish rendered video from the real thing.
I used to be able to tell when there was CG in pretty much any movie, but it turned out there was a lot more CG in movies that I simply never noticed. I'm willing to bet there are already movies where you can't tell whether something is CG or not.
I recently found out that the majority of photos in an IKEA catalogue are in fact entirely CG. It makes perfect sense, of course, and the fact that they're doing it is unsurprising.
What did surprise me was the quality of the renderings: according to this article, not even their own QA department can tell the difference between their photos and renderings any more: http://www.cgsociety.org/index.php/CGSFeatures/CGSFeatureSpe... (so they put their photographers on a 3D modelling course, and the 3D modellers on a photography course, to blur the distinction even further--really cool article, IMO).
True, but the downside of this is that anyone caught in the act will use it as a defense of first resort - it'll be the 2020s equivalent of 'my twitter account was hacked' until we establish some sort of reliable Error Level Analysis (ELA) tool to grade source material.
Indeed, as there are more and more cameras around (including autonomous ones of increasingly tiny size) imagery of the videographer will probably become a major authentication factor.
- We have mechanisms to prove that video is taken after a certain time (show a newspaper on the video, or get the video cryptographically timestamped via a trusted server).
- We have mechanisms to prove that something was not recorded before a certain time, by doing something interactive and unlikely with live viewers.
- We have mechanisms to detect if something has been modified from its original form (signing).
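The last two mechanisms boil down to binding a hash of the footage to a time and a key. Here's a minimal sketch of a trusted timestamping server, using an HMAC as a stand-in for the server's key (a real service, e.g. an RFC 3161 timestamp authority, would sign with an asymmetric key pair; the key name and functions here are hypothetical):

```python
import hashlib
import hmac
import json
import time

# Hypothetical stand-in for the timestamp server's secret key.
# A real TSA would use an asymmetric key pair; HMAC keeps this sketch stdlib-only.
SERVER_KEY = b"trusted-timestamp-server-secret"

def timestamp_video(video_bytes: bytes, now: float) -> dict:
    """Server side: bind a hash of the footage to the current time."""
    digest = hashlib.sha256(video_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, "time": now}, sort_keys=True)
    token = hmac.new(SERVER_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "token": token}

def verify_timestamp(video_bytes: bytes, stamp: dict) -> bool:
    """Check two things: the token is genuine, and the footage still
    hashes to the value that was stamped."""
    expected = hmac.new(SERVER_KEY, stamp["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, stamp["token"]):
        return False
    claimed = json.loads(stamp["payload"])["sha256"]
    return hashlib.sha256(video_bytes).hexdigest() == claimed

video = b"\x00\x01raw frames\x02"
stamp = timestamp_video(video, time.time())
print(verify_timestamp(video, stamp))            # True: untouched footage
print(verify_timestamp(video + b"edit", stamp))  # False: any edit breaks it
```

Note this only proves the footage existed unmodified at stamping time, not that what it depicts ever happened.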
You might be able to make a CCD chip that signs every frame with a private key and also ships each frame off to a public signing server. Producing that CCD along with the video it captured might serve as proof. But you could defeat that with a display hooked up in front of the camera, feeding the doctored image to the trusted camera.
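The per-frame signing idea can be sketched as a hash chain: each frame's signature incorporates the previous one, so dropping, reordering, or replacing any frame invalidates everything after it. This is an illustrative sketch with an HMAC standing in for the camera's private key (a real trusted camera would keep an asymmetric key in tamper-resistant hardware and publish the verification half):

```python
import hashlib
import hmac

# Hypothetical on-sensor secret; stands in for a hardware-protected private key.
CAMERA_KEY = b"sensor-embedded-private-key"

def sign_stream(frames):
    """Camera side: sign each frame, chaining in the previous tag so that
    tampering with any frame invalidates the rest of the stream."""
    tags, prev = [], b"genesis"
    for frame in frames:
        tag = hmac.new(CAMERA_KEY, prev + hashlib.sha256(frame).digest(),
                       hashlib.sha256).digest()
        tags.append(tag)
        prev = tag
    return tags

def verify_stream(frames, tags):
    """Verifier side: recompute the chain and compare tag by tag."""
    prev = b"genesis"
    for frame, tag in zip(frames, tags):
        expected = hmac.new(CAMERA_KEY, prev + hashlib.sha256(frame).digest(),
                            hashlib.sha256).digest()
        if not hmac.compare_digest(expected, tag):
            return False
        prev = expected
    return len(frames) == len(tags)

frames = [b"frame-0", b"frame-1", b"frame-2"]
tags = sign_stream(frames)
print(verify_stream(frames, tags))                      # True
print(verify_stream([frames[0], frames[2]], tags[:2]))  # False: frame dropped
```

Crucially, this only proves the pixels weren't altered after capture; it does nothing against the display-in-front-of-the-lens attack described above.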
I remember learning about Stalin's photo retouching, and reading about the systematized photo retouching and censorship in 1984. At the time I thought it was completely impractical: there's just too much work to do and not enough people to do it. Let's hear it for automation, putting the auto in Autocracy! :)
> You might be able to make a CCD chip that signs every frame with a private key and also ships each frame off to a public signing server. Producing that CCD along with the video it captured might serve as proof. But you could defeat that with a display hooked up in front of the camera, feeding the doctored image to the trusted camera.
How does that defeat the cryptographic timestamping?
The goal is to have a video that can be used as evidence in court: what you're seeing actually happened. You can guarantee that the framebuffer recorded by the trusted camera is indeed an accurate recording of the photons that entered the lens.
But you still wouldn't be able to guarantee that what it saw actually happened: with enough compute and good enough algorithms, you could spoof a scene in real time on a small display strapped to the front of the trusted camera. Even though the trusted camera faithfully records what it sees, what it sees isn't what is really happening.
So we still don't get back to being able to use video as evidence.
I believe everything you have said is true, especially with cameras becoming smaller and more common.
However, I'll still be optimistic and hope that with the increasing number of cameras, people will be less likely to engage in activities where they shouldn't. The counter-argument is that as the technology to fake such activity improves, the number of 'my twitter account was hacked' incidents will rise.
But the "activities where they shouldn't" will be defined by the powerful. It's total state or corporate control, depending on your dystopian future preference.
The solution to this problem of pools of power probably lies in deciding what is and isn't allowed through a continuous, consensus-based method. To enable that, we need realtime feeds of everything, available to all, and some sort of filtering mechanism to deliver video to the right people for evaluating consensus on anything contestable that happens. Today's 'powerful' are only powerful because they control the information. Whatever gets built to improve upon society needs to ensure it can't be wielded for self-gain or as a lever for increased power.
Social regulation by Youtube commenters? What fresh hell is this?
Consensus is not sufficient. This is why rights-based approaches are developed. It has to be OK to live an unpopular lifestyle that's not harmful to others.
I agree, which is why I said "deliver video to the right people for evaluating consensus on whatever happens that is contestable". That gives you the right to live a way that is 'contestable' without having to pay a price for it (because you have a right to it).
We live in a majority reality. Enough force in the right place causes action.
I'm not suggesting a mechanism that operates on the 51% rule. I'm suggesting we 'elect' certain people to serve in roles that allow quick consensus to be formed when there is disagreement. Put those judgments in a safe place and you have 'laws' that can be referenced in the future.
I was thinking more along the lines of assaulting people or committing crimes. Things that could ruin your reputation or career that are generally disagreeable. I think what you are trying to say is that the powerful will determine what is right and what is wrong.
I can see this being used primarily against those in power though, so you aren't wrong.
> Things that could ruin your reputation or career that are generally disagreeable.
A deeply closeted gay man (with a wife and kids) taking his first steps out of the closet and kissing a man could have his career ruined by exactly those things. Is that OK?
Another example of use of imagery for enforcing a minority's idea of "shouldn't": in response to Emma Watson's feminism, threats have been issued to leak nudes of her.
In a world where entirely realistic body images can be created of anyone, it becomes possible to threaten to "leak nudes" of anyone, even if they've never taken any.
> In a world where entirely realistic body images can be created of anyone, it becomes possible to threaten to "leak nudes" of anyone, even if they've never taken any.
It is also the world where nude photos of anyone have exactly zero value, because everyone who wants them can generate those themselves and there's no easy way to distinguish between real, doctored and fake ones.
It's terrifying to think that in the next 5 to 10 years we won't be able to distinguish a forged, high-definition video of pretty much anybody from the real thing.