[HN Gopher] Nvidia's Crazy New Neural Engine Is Redefining Reali...
___________________________________________________________________
 
Nvidia's Crazy New Neural Engine Is Redefining Realism in Graphics
 
Author : behnamoh
Score  : 104 points
Date   : 2023-05-06 19:49 UTC (3 hours ago)
 
web link (www.youtube.com)
w3m dump (www.youtube.com)
 
| cs702 wrote:
| Holy mackerel! Those demos are impressive.
| 
| Sooner than you expect, games and virtual reality are going to
| look as good as movies -- scratch that, they are going to look
| _better than movies_.
| 
| We live in interesting times.
 
  | gumballindie wrote:
  | Mobile VR won't work as nicely tho; it's tethered VR that
  | stands a chance.
 
    | smoldesu wrote:
    | I don't see why they both can't exist. Mobile VR already
    | works great, and the power of mobile chipsets is only
    | increasing. Tethered VR will only dominate for as long as
    | mobile GPUs struggle to output stereo 2048x2048~ish video,
    | which... isn't going to be for long. Combined with SOTA
    | upscaling, we may already be there by some standards.
    | 
    | I say all this as someone with a Quest who tethers to a PC
    | for Beat Saber and Half Life Alyx. Tethered experiences rule
    | - but untethered ones are really not that far off.
 
      | gumballindie wrote:
      | > but untethered ones are really not that far off.
      | 
      | I am not sure how you measure this, but you can't run
      | proper games on mobile VR. Mobile is limited; you can't fit
      | a GPU the same size as a desktop GPU into a headset. John
      | Carmack has an interesting talk about what happens when
      | mobile chips get too small and crowded. We simply can't
      | fight physics. It would be awesome if we had the same
      | experience tho.
 
        | smoldesu wrote:
        | Obviously the two will never coincide. Mobile GPUs _will_
        | eventually reach a "good enough" stage though, and
        | arguably we're already there. The quality of Quest-native
        | games like Beat Saber is almost identical to the version
        | on PC. Older games like Resident Evil 4 play just like
        | normal. Visually it can be 'meh', but the tech is there
        | and the option to stream from a more powerful desktop
        | still exists. It uses comparatively weak chipsets to
        | deliver pretty-damn-good visuals at a price point lower
        | than most consoles.
        | 
        | I would argue that your thesis of "you can't run proper
        | games on mobile vr" is wrong. Today, you can go play DOOM
        | 3 or Half Life in VR, untethered, on a sub-$500 headset.
        | That should startle everyone working on tethered systems.
 
        | gumballindie wrote:
        | Interesting, I need to give it another go.
 
        | dharma1 wrote:
        | AirLink Wi-Fi stream with Quest is great, not much
        | different to tethered cable.
        | 
        | I think there is a case for local ML becoming more
        | popular too, I could see nvidia making a Shield like box
        | at some point with a mobile 4000 series GPU and good
        | thermals, that could bring those GPUs to mainstream
        | beyond hardcore gamers. It would work for gaming, VR and
        | consumer local ML apps (Siri that actually works and
        | doesn't leak data).
        | 
        | Maybe some day the latency/bandwidth will be good enough
        | to stream VR from edge servers so you don't even need a
        | local GPU. We're not there yet, even for non-VR games
        | 
        | I think we'll see new classes of games/entertainment too
        | where you'll just describe the (VR) experience you want
        | and ML constructs a game or (immersive) movie like
        | experience of it. Maybe a different one every day.
        | 
        | I can see Nvidia selling a lot more GPUs in the next
        | decade as ML and real-time 3D becomes much more
        | pervasive. At some point other much more power efficient
        | architectures (our brain uses 12 watts) will trump
        | general purpose GPUs
 
        | gumballindie wrote:
        | > I could see Nvidia making a Shield-like box at some
        | point with a mobile 4000-series GPU and good thermals
        | 
        | That is indeed how I see it working. A dedicated VR
        | "console" of sorts that tethers via Wi-Fi.
 
        | smoldesu wrote:
        | Personally, I don't ever see something like this being
        | made (or at least marketed as such).
        | 
        |  _However,_
        | 
        | *pauses to put on tinfoil hat*
        | 
        | Nvidia does sell devkits that roughly match the compute
        | footprint you're describing[0]. They're ARM SoCs, which
        | puts them at a disadvantage for gaming, but the form
        | factor does exist. If you need a lot of high-power AI
        | compute and are willing to tinker with it, you can't beat
        | CUDA on ARM.
        | 
        | Again though - there's a reason these are sold as devkits
        | and not products. Every YC-backed roach from here to
        | Mississippi is going to spend their next half-decade
        | trying to get your data/money for a machine learning
        | product. Nvidia knows it's a losing game to sell hardware
        | instead of services here, so they're arming the
        | entrepreneurs instead of the consumer. Frankly, I think
        | it's the right move anyways. People are going to need
        | scaling compute for decently fast AI inferencing in the
        | future, and Nvidia can keep that scaling curve under
        | their thumb. Hell, I wouldn't be surprised if there are
        | Nvidia execs suggesting that they abandon the gaming
        | market altogether just to focus on more lucrative
        | AI/datacenter customers.
        | 
        | [0] https://store.nvidia.com/en-
        | us/jetson/store/?page=1&limit=9&...
 
  | hanniabu wrote:
  | > they are going to look better than movies.
  | 
  | Movies will be made with this stuff. No expensive actors, sets,
  | or long editing times that come with traditional animation.
 
| HellDunkel wrote:
| As far as I understand, the neural engine is trained on texture
| BRDFs, which are too memory intensive for most rendering use
| cases (offline & realtime). The hierarchical model delivers
| sub-pixel accuracy, outperforming mipmaps. So far so good, but
| this is no replacement for tracing a ton of rays. Temporal
| stability and versatility are questionable. It will be
| interesting to see how this compares to Unreal's Substrate.
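| 
| Very roughly, the idea (just a toy numpy sketch of the concept,
| not the actual architecture or weights from the paper; the layer
| sizes and the neural_brdf name here are made up): a tiny MLP
| stands in for a filtered BRDF/texture lookup, with the level of
| detail fed in as an input instead of picking a mipmap level.
| 
|     import numpy as np
| 
|     rng = np.random.default_rng(0)
|     # 9 inputs: uv (2) + lod (1) + light dir (3) + view dir (3)
|     W1 = rng.normal(0, 0.1, (32, 9)); b1 = np.zeros(32)
|     # 3 outputs: RGB reflectance
|     W2 = rng.normal(0, 0.1, (3, 32)); b2 = np.zeros(3)
| 
|     def neural_brdf(uv, lod, wi, wo):
|         x = np.concatenate([uv, [lod], wi, wo])
|         h = np.maximum(W1 @ x + b1, 0.0)   # ReLU hidden layer
|         y = W2 @ h + b2
|         return 1.0 / (1.0 + np.exp(-y))    # keep output in [0, 1]
| 
|     # One shading sample; a trained network learns prefiltered
|     # appearance across lod instead of relying on mipmap levels.
|     print(neural_brdf(np.array([0.25, 0.75]), 2.0,
|                       np.array([0.0, 0.0, 1.0]),
|                       np.array([0.5, 0.0, 0.86])))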
 
| low_tech_love wrote:
| Cool, but I'm sure there must be some better link for this
| material. Quote from a YouTube comment: _Now if we could only get
| text to speech from "OK" to "blows you away", instead of "your
| drunk uncle talking from down an empty well"_
 
  | miohtama wrote:
  | I thought AI today could do better than this. Sounds like '80s
  | speech synth. Even Dr. Sbaitso was better.
 
  | boulos wrote:
  | Some of the results on https://rhasspy.github.io/piper-samples/
  | are super impressive (via https://www.home-
  | assistant.io/blog/2023/04/27/year-of-the-vo...).
 
  | etiam wrote:
  | I'm going to have to defer to Louis CK on that one
  | 
  | https://www.youtube.com/watch?v=PdFB7q89_3U
 
  | washadjeffmad wrote:
  | https://beta.elevenlabs.io/
  | 
  | I've been playing with TorToiSe and other emerging local
  | projects with voice cloning, but ElevenLabs is so far ahead
  | that it's the only one I've considered promoting to clients.
 
| WithinReason wrote:
| Very impressive results I would say. Link to paper:
| https://research.nvidia.com/labs/rtr/neural_appearance_model...
| Similar one, about texture compression:
| https://research.nvidia.com/publication/2023-08_random-acces...
 
  | etiam wrote:
  | Do we have anyone here who feels qualified to comment on how
  | much more it would take to run the process backwards and
  | de-render a stream of general footage into a compact, nearly
  | lossless format for transmission or storage?
 
  | andybak wrote:
  | Thank you for the non-video link. When I'm on HN it's because
  | I'm expressly in a reading mode. I do enjoy videos but not when
  | I'm browsing here.
 
  | ulnarkressty wrote:
  | > equal contribution, order determined by a rock-paper-scissors
  | tournament.
  | 
  | I hope to see more and more bizarre ways of picking the author
  | order on these kinds of papers.
 
    | codetrotter wrote:
    | I want to see one where it says
    | 
    | > equal contribution, three of the authors are the real
    | authors, the other seven names were generated by an AI
 
      | TOMDM wrote:
      | Why?
 
| kasperni wrote:
| From the paper
| 
| --------------
| 
| Our system is running on Direct3D 12 using hardware-accelerated
| ray tracing through DirectX Raytracing (DXR). All results are
| generated on an NVIDIA GeForce RTX 4090 GPU at resolution 1920 x
| 1080
 
| andrewmcwatters wrote:
| I think graphics technology has already "arrived" for the most
| part. There's tons of stuff that surpasses what Crytek did that
| blew away the industry 16 years ago, but a lot of that wasn't
| just technology, it's that they had world-class artists.
| 
| If you go back to that game today, there's a lot you can nitpick,
| but I think the biggest problem that remains today is that it's
| still extremely labor intensive to make assets.
| 
| At Planimeter, I worry more about asset creation turnaround time
| than anything else. I worry about it more than the tech we write,
| I worry about it more than fiction writing, or audio engineering.
| Just nothing else compares.
| 
| It's the most expensive part of any game development pipeline,
| and yet industry-wide accessible photogrammetry is half-baked,
| and it also doesn't help hobbyists who do 2.5D or 2D work.
| 
| So yeah, this neural engine stuff is super phenomenal for people
| who care about PBR workflows and photorealistic pipelines. This
| is obviously the most computationally complex work that can be
| addressed today, anyway, which I appreciate.
| 
| But the group of people who can actually access and harness that
| tech is just so absolutely tiny. I guess I just care a lot about
| this one particular sector of the industry, where you have these
| world-class bedroom professionals who become studio
| professionals, and Unreal and Nvidia are probably the only orgs
| in the world who cater to them. For some reason I have to put in
| a lot of effort to articulate why what we have today, despite
| being so much more powerful than what we had 20 years ago, is
| less accessible, less functional, and less empowering than what
| we had then.
| 
| I think it's primarily the labor factor in artwork, but
| accessible engine tech today is also actually worse than what we
| were using then, simply because no one really uses the id Tech
| family of engines anymore besides their most modern incarnations,
| and even to this day no class of engine compares to old versions
| of id Tech. Not even Unreal, by a long shot, despite its
| industry-leading rendering capabilities; everything else about
| that engine is half-baked or unusable.
 
  | Animats wrote:
  | Good point.
  | 
  | I would have liked to see examples of human skin, and of cloth.
  | That's hard and important. Rendering the perfect cheese grater
  | and glazed pot is nice, but really, not that important.
  | 
  | Most of the hard problems in graphics today involve scale.
  | Epic's Nanite virtualized geometry system is impressive, but
  | about 60% of the work has to be done on the CPU because GPUs
  | don't have the right stuff for it. And Nanite is still just for
  | rigid objects. Crowds of individually dressed people are still
  | tough to render fast.
  | 
  | NVidia put ray-tracing hardware into GPUs. It's not used
  | much in games. NVidia was angry with reviewers who just ignored
  | their ray-tracing hardware in reviews. The reviewers were not
  | wrong.
  | 
  | And could we please have good hardware support for order-
  | independent translucency? That should be standard. Then we can
  | get rid of depth sorting of faces, which never works right.
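  | 
  | To illustrate why order independence helps, here is a toy CPU
  | version of one software approach (weighted blended OIT, in the
  | spirit of McGuire & Bavoil 2013, with a simplified depth
  | weight). The accumulation is just sums and a product, so the
  | result doesn't depend on the order fragments arrive in, unlike
  | classic back-to-front "over" blending.
  | 
  |     import numpy as np
  | 
  |     def composite_wboit(fragments, background):
  |         """fragments: (rgb, alpha, depth) per pixel, any order."""
  |         accum_rgb = np.zeros(3)
  |         accum_w = 0.0
  |         transmittance = 1.0
  |         for rgb, a, z in fragments:
  |             # simplified depth-based weight (closer = heavier)
  |             w = a * np.clip(10.0 / (1e-5 + (z / 10.0) ** 3),
  |                             0.01, 3000.0)
  |             accum_rgb += np.asarray(rgb, float) * a * w
  |             accum_w += a * w
  |             transmittance *= 1.0 - a
  |         avg = accum_rgb / max(accum_w, 1e-5)
  |         return (avg * (1.0 - transmittance)
  |                 + np.asarray(background, float) * transmittance)
  | 
  |     frags = [((1, 0, 0), 0.5, 1.0),
  |              ((0, 1, 0), 0.5, 2.0),
  |              ((0, 0, 1), 0.5, 3.0)]
  |     print(composite_wboit(frags, (0, 0, 0)))
  |     print(composite_wboit(frags[::-1], (0, 0, 0)))  # same output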
 
  | mmcconnell1618 wrote:
  | I wonder how much AI models can help. If they are trained on
  | enough real world content, they should be able to produce rough
  | models based on descriptions: "Generate a tropical island about
  | 5 km wide by 10 km long with an active volcano in the center"
  | and then the artists adjust/massage as needed.
 
    | T-A wrote:
    | https://analyticsindiamag.com/raja-koduri-gives-a-sneak-
    | peek...
 
| kman82 wrote:
| The day VR looks like Avatar is the day I buy a VR headset.
 
| [deleted]
 
| xwdv wrote:
| This is the climax of 3D graphics rendering!!
 
| est wrote:
| https://research.nvidia.com/labs/par/Perfusion/
| 
| This gets less traction, but I think it could be the next hit
| after LoRA.
 
| joering2 wrote:
| Looking at the progress of graphics in the last 40 years, do you
| have any idea what would be considered "progress in graphics" in
| the year 2100?
| 
| I mean seriously, it looks like in the next maybe 5-15 years,
| GPUs will be able to render graphics that even another GPU won't
| be able to tell apart from the real thing.
 
| clnq wrote:
| Finally -- a use for the Rockwell Retro Encabulator.
 
  | whyenot wrote:
  | I still don't understand what the function is of the sinusoidal
  | dingle arm.
 
  | eljost wrote:
  | The audio also reminded me of the Fallout games.
 
___________________________________________________________________
(page generated 2023-05-06 23:00 UTC)