But new with the update is virtual lighting, to better highlight you in your video.
Some of these new features call for powerful GPUs.
That said, I was able to run both features on an RTX 4060 mobile GPU. Even so, they really may be as demanding as Nvidia says: my laptop's fans ramped up as if I were gaming at full throttle.
So even from a pure economics standpoint, these features are costly: you'll need powerful hardware to run them, and then you'll be running that hardware hard.
Plan on using these features on a desktop computer or with your laptop plugged in.
Then there's the even more crucial matter of how they really look and sound.
Let's start with video.
The eye contact tool, despite being available before Broadcast 2.0, has now come out of beta.
But I'm not convinced it should have.
Sure, enabling it makes it look like I'm staring into the camera in video footage, but it doesn't get my eyes right.
For reference, I do not have blue eyes.
The virtual key light did what it said.
It created artificial lighting to boost brightness on me without bumping up the brightness on the whole video.
The results failed to impress me, though.
With it enabled, I simply look like I've gone radioactive; the lighting is very unnatural.
As for the audio, at first blush, it sounds fairly impressive.
The mics on my laptop are not very good.
Even in a quiet room, they put out audio that has me sounding far away and slightly muffled.
With studio voice enabled, my voice ends up much fuller and clearer sounding.
But listening closely, there's an odd digitized quality to it.
It's hard to characterize, but it doesn't sound like it's really my voice.
It's all just a little stilted and quavering.
Listen below:
The studio voice feature also cant save the mic from a bad recording environment.
If you have a half-decent microphone, studio voice might even make it worse.
Can Nvidia AI replace a proper streaming setup?
If you have an Nvidia-powered system, by all means, play with the tool.
Some of the features can come in handy, like the auto-framing one.