Speaker 1: Pixel phones have long been known for having one of the best cameras for stills, but the Pixel 8 series focuses on video for the first time. I'm here in Google's Real World test lab to find out what the Pixel camera has to offer, and we're going to give you a special first look. Trust me, when you think of labs, you probably don't think of this. Google built these spaces to replicate the places you would actually take photos and videos. There's a cafe, a living room, and some secret areas that are off-limits [00:00:30] to show you. Control is what sets testing in this lab apart from, say, an on-campus cafe. Every lighting element in each room can be operated individually, from color temperature to intensity, and the lighting grid on the ceiling can recreate anything from evening light to sunrise. The controlled environment also lets engineers test the same situations over and over again to make sure the phone delivers consistent results. This custom-built phone rig is used for side-by-side testing and comparison. We're looking at how Google tested the Pixel's [00:01:00] new low-light video enhancement, called Night Sight Video.

Speaker 2: We introduced the original Night Sight feature many years ago to take ultra-low-light photos, but it was always difficult to bring it to video. Processing a 12-megapixel photo is different from processing video at over 200 megapixels per second. But with Night Sight Video, you can take all the HDR and Night Sight photo processing and bring it to your video without any compromises.

Speaker 1: [00:01:30] It looks like an everyday cafe, but when the lights go down and the test begins, what exactly are you looking for?

Speaker 2: Simple everyday scenes, like a candlelit dinner or a romantic evening out, turn out to be very difficult for the camera. Say two people are sitting here. The camera may have to decide which one to focus on; perhaps one of them isn't facing the camera but is close, and the other is facing the camera but is far away. A candle is a tiny dot of [00:02:00] incredibly bright light, and worse than that, it moves and casts different shadows across the room as it flickers. You also need to make sure the flicker of the candle doesn't cause the exposure to flicker. The camera has to be confident about what's going on around it and stay smooth and controlled. So a test where the face is far away and backlit can put the camera in a very difficult situation. [00:02:30] It looks like a Saturday night party, but it's a real test for the cameras.

Speaker 1: Processing a 60-second 4K video at 30 frames per second is the equivalent of processing 1,800 photos, so these low-light videos take a lot of computing power. That's why Video Boost files are sent to the cloud. Video Boost applies the same HDR+ algorithm used for still images to adjust the video's dynamic range, color, and detail. It works on daytime video and on low-light video, which Google calls Night Sight Video. Here's the same clip [00:03:00] with Video Boost turned on and turned off. Night Sight for stills has been around since the Pixel 3 in 2018, but bringing it to video took this long, and the technology has inspired other phone makers to roll out night modes of their own.
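To put those numbers in perspective, here is a rough back-of-the-envelope calculation. It assumes standard 4K UHD frames of 3840x2160 pixels; the video doesn't specify the exact readout Google's pipeline uses.

```python
# Back-of-the-envelope: why 4K video is so much heavier than a single photo.
# Assumes standard 4K UHD frames (3840 x 2160); the actual Pixel pipeline
# details aren't given in the video.

frame_width, frame_height = 3840, 2160
fps = 30
clip_seconds = 60

pixels_per_frame = frame_width * frame_height          # ~8.3 million pixels
megapixels_per_second = pixels_per_frame * fps / 1e6    # ~249 MP/s
frames_in_clip = fps * clip_seconds                     # 1,800 frames

print(f"{pixels_per_frame / 1e6:.1f} MP per frame")
print(f"{megapixels_per_second:.0f} MP processed per second")
print(f"{frames_in_clip} frames in a {clip_seconds}-second clip")
```

That lines up with the figures in the video: over 200 megapixels of data per second, and 1,800 frames, each roughly the size of a photo, in a one-minute clip.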
Speaker 1: Most phones render colors well in daylight, but in low light the camera can get very confused, especially when there are random textures and colors in the scene, like on this board game.

Speaker 2: Cameras have something called a Bayer filter on top of the sensor. [00:03:30] It's what lets the camera distinguish colors. But converting that Bayer data into an RGB image in low light takes a lot of combining, rendering, and algorithms, and that's very difficult. For example, rounding errors can make a green not look like the right green. In a scene like this there are pinks, oranges, yellows, and multiple shades of green. Human eyes are very good at picking out different shades of green, so we have to make sure everything looks just right, even in low light, [00:04:00] and that we keep the vivid color saturation that Night Sight is known for.

Speaker 1: You're looking at all these test images and test files, but what determines what's right and what's wrong? Is it what you think is correct, what the human eye would perceive, or an algorithm?

Speaker 2: You always start with a trained eye and your own experience of, "What would I want the camera to do in this situation? What if this were my home and family?" You repeat the same test over and over to decide what the correct answer is and to make sure the camera is [00:04:30] consistent. With this board game, there's actually a color chart next to it, and the chart is calibrated so we know exactly what the correct colors should be. But just producing technically correct colors doesn't always make the image right. There's a difference between what the color chart says and how we remember a moment, or how we want to remember it, and there needs to be a balance.

Speaker 1: Of course, we're not all just indoors taking photos and videos. Google's tests also take place outdoors. [00:05:00] This comparison shows the difference in autofocus and tone mapping between two phones when the subject is moving.

Speaker 2: Technically speaking, the human eye can see an incredible amount of what's called dynamic range, from very deep shadow detail to very bright things like the sun shining through a window. Your eyes can see all of that. Cameras can see much less, and the formats we use to share images, such as JPEG, can hold even less. [00:05:30] If the real world is this big and the format is this big, you need a way to compress this much into this much. Tone mapping is how you do that. With very good tone mapping, you can do a lot of compression while the image still looks as natural as what you saw.

Speaker 1: And what good is your video without properly tested audio? Test, 1, 2, 3. The usual way to clean up audio is to tune frequencies, but if you try to remove a sound like wind, which is low frequency, you can end up making the rest of the audio sound bad too.

Speaker 3: That's where the [00:06:00] speech enhancement feature comes in. It uses AI. There are trained AI models that are very good at identifying which sounds are speech. Once you identify the speech, you can preserve that part of the audio and reduce the non-speech part.

Speaker 4: Test, 1, 2, 3. This is with speech enhancement off.

Speaker 1: Yeah, it's pretty loud.

Speaker 4: Test, 1, 2, 3. This is with speech enhancement turned on. [00:06:30] Let's see if you can hear the difference.

Speaker 1: I can definitely hear it. That's impressive.
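To make the tone-mapping idea concrete, here is a minimal sketch using the classic Reinhard global operator, which squeezes a wide range of scene luminances into the 0 to 255 range a JPEG can hold. It only illustrates the concept; the video doesn't describe Google's actual tone-mapping algorithm, and this is not it.

```python
# Minimal illustration of global tone mapping: compressing a wide range of
# scene luminances into the 0-255 range a JPEG can hold. This is the classic
# Reinhard operator, used here only to show the idea; it is NOT the tone
# mapping Google's pipeline actually uses.
import numpy as np

def reinhard_tone_map(luminance: np.ndarray) -> np.ndarray:
    """Map linear scene luminance (any positive range) into [0, 1]."""
    return luminance / (1.0 + luminance)

# A scene spanning deep shadows (0.01) to a sun-lit window 100,000x brighter.
scene = np.array([0.01, 0.1, 1.0, 10.0, 100.0, 1000.0])
mapped = reinhard_tone_map(scene)
eight_bit = np.round(mapped * 255).astype(np.uint8)

for lum, code in zip(scene, eight_bit):
    print(f"scene luminance {lum:8.2f} -> 8-bit value {code}")
```

Real pipelines typically use more sophisticated, often local, tone mapping, but the shape of the problem is the same: huge jumps in scene brightness have to become small, natural-looking steps in the output file.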
Speaker 1: Again, it doesn't sound like you're in a recording studio or super isolated. It's still true to where you were.

Speaker 3: Yes, that's right.

Speaker 1: Having a controlled space like the Real World test lab isn't just important on the software side. It's also useful for hardware development.

Speaker 2: When you build hardware, you have to make sure it works well week in, week out, through all the different prototypes and factory builds you get. [00:07:00] You can't just test it once and expect it to keep working forever. In autofocus, some lens elements are stationary and others slide back and forth. You might have a situation where your phone has been sitting like this, with the lens settled back, and you flip it up to focus. Say you pick it up off the table to shoot a video: as the lens moves to where you want the focus, the grease on the rail has pooled at one end, so the lens is pushing through grease, and that first focus may behave differently [00:07:30] from later ones, after the grease has spread back across the rail.

Speaker 1: What counts as really good video quality on a Pixel? What criteria have to be met before it's ready to ship?

Speaker 2: Video scenes aren't static, but you don't want things like exposure and focus jumping back and forth. We want them to be confident and stable. That's where a place like this is so useful, because the lighting conditions can also be changed. We [00:08:00] can vary the scene with controlled waveforms and make sure the camera stays locked at the right exposure, with the right focus and white balance. We call that temporal consistency. Behavior over time is very important, and having a controlled scenario like this to test it is very helpful.

Speaker 1: To learn more about Pixel cameras and how everything is tested, check out the Real World Lab article by my friend Patrick Holland. It's on CNET and linked in the description. Of course, click like and subscribe for more content, and let me know what other [00:08:30] tech you'd like to see in the next deep-dive video. See you.
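To make the temporal-consistency idea above concrete, here is a minimal sketch of one way such a check could look in code: measure each frame's mean brightness and flag frames where exposure bounces back and forth instead of changing smoothly. The function names, thresholds, and method are illustrative assumptions, not Google's actual test criteria.

```python
# A minimal sketch of the "temporal consistency" idea: measure each frame's
# mean brightness and flag frames where exposure reverses direction sharply
# (a sign of hunting). Thresholds and method are illustrative assumptions,
# not Google's actual test criteria.
import numpy as np

def frame_means(frames: np.ndarray) -> np.ndarray:
    """frames: array of shape (num_frames, height, width), grayscale 0-255."""
    return frames.reshape(frames.shape[0], -1).mean(axis=1)

def exposure_hunting(frames: np.ndarray, jump_threshold: float = 5.0) -> list[int]:
    """Return indices of frames where brightness reverses direction by more
    than jump_threshold gray levels in consecutive steps."""
    means = frame_means(frames)
    deltas = np.diff(means)
    flagged = []
    for i in range(1, len(deltas)):
        reversed_direction = deltas[i - 1] * deltas[i] < 0
        big_enough = min(abs(deltas[i - 1]), abs(deltas[i])) > jump_threshold
        if reversed_direction and big_enough:
            flagged.append(i)  # frame where the brightness turns around
    return flagged

# Synthetic example: a smooth dimming ramp with one exposure bounce at frame 6.
frames = np.full((10, 4, 4), 120.0)
frames *= np.linspace(1.0, 0.8, 10)[:, None, None]  # smooth dim over 10 frames
frames[6] += 20                                      # sudden exposure bounce
print(exposure_hunting(frames))                      # flags the bounce frame
```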