Quality of Life
Bringing Infinite Possibilities of Expression with a New Production Scheme for Visual Content—The Future of Image Expression Envisioned by Nikon Creates
Nikon has not only developed imaging products such as cameras; it has also long contributed to the development of image expression, for example by creating environments where creators can realize the full potential of their imagination and individuality.
In order to further enhance the potential of image expression and venture into the next generation of visual content, Nikon established the video production company, Nikon Creates.
This article shines a spotlight on some of the initiatives conducted by Nikon Creates. The Vision 2030 magazine’s Editing Team visits the newly opened Heiwajima Stage—a facility in Heiwajima, Tokyo, that brings together numerous state-of-the-art technologies—to take a closer look at the future of image expression as envisioned by Nikon Creates.
Volumetric Studio: A Production Site for Next-Generation Free-Viewpoint 3D Imagery
Our first stop at the imaging complex Heiwajima Stage: the Volumetric Studio. “Volumetric” refers to a technology that reproduces 3D spaces from photographed imagery. The Volumetric Studio is equipped with over 100 cameras wrapping 360 degrees around the space, forming the next-generation, free-viewpoint 3D volumetric video system “POLYMOTION STAGE.” The technology allows for the production of 3D video data that captures the full 360-degree view of the studio. Some of the cameras also incorporate motion capture technology.
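To give a rough sense of the geometry that underlies this kind of multi-camera capture, the sketch below triangulates a single 3D point from its projections in several calibrated cameras using a direct linear transform. The camera parameters and the target point are made-up values for demonstration; this is a generic textbook method, not the actual POLYMOTION STAGE reconstruction pipeline.

```python
# Minimal sketch of multi-view triangulation, the kind of geometry behind
# reconstructing 3D data from many synchronized, calibrated cameras.
# All camera parameters below are illustrative assumptions.
import numpy as np

def triangulate(proj_mats, points_2d):
    """Recover a 3D point from its 2D projections in several calibrated cameras.

    proj_mats: list of 3x4 camera projection matrices
    points_2d: list of (u, v) pixel observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])   # each observation adds two linear constraints
        rows.append(v * P[2] - P[1])
    A = np.stack(rows)
    _, _, vt = np.linalg.svd(A)        # least-squares solution in homogeneous form
    X = vt[-1]
    return X[:3] / X[3]

def look_at_projection(cam_pos):
    """Build a simple projection matrix for a camera at cam_pos aimed at the origin."""
    z = -np.asarray(cam_pos, float); z /= np.linalg.norm(z)       # forward axis
    x = np.cross([0.0, 1.0, 0.0], z); x /= np.linalg.norm(x)       # right axis
    y = np.cross(z, x)                                             # up axis
    R = np.stack([x, y, z])
    t = -R @ cam_pos
    K = np.array([[800.0, 0, 640], [0, 800.0, 360], [0, 0, 1]])    # assumed intrinsics
    return K @ np.hstack([R, t[:, None]])

# Toy setup: two cameras looking at the origin from different sides.
cams = [look_at_projection(np.array(p, float)) for p in [(3, 1, 3), (-3, 1, 3)]]
target = np.array([0.2, 1.0, 0.1])
obs = []
for P in cams:
    uvw = P @ np.append(target, 1.0)
    obs.append(uvw[:2] / uvw[2])

print(triangulate(cams, obs))   # recovers approximately [0.2, 1.0, 0.1]
```

In practice, volumetric capture combines this kind of geometry with dense correspondence, surface reconstruction, and texturing across far more cameras, but the basic principle that every additional calibrated view adds constraints on the 3D shape is the same.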
Cameras are not the only equipment arranged to cover all 360 degrees; lighting is also directed at the subject from every angle. Representative Director and President Kazuhiro Hirano explains that this is vital to avoid casting shadows on the subject.
“Shadows on the subject cannot be removed in post-production. To use the resulting 3D video data freely, the lighting in the data should be as flat as possible. That is why light is directed at the subject from every direction.”
For athletes such as baseball and soccer players, this also enables more detailed analysis from various angles, making it easier to understand and make use of subtle movements.
“The Volumetric Studio dramatically changes the production scheme of videos,” Hirano-san adds. We were shown an example of 360-degree 3D video data created at the Volumetric Studio.
“One good example is MVs for artists. Formerly, when creating an MV for a band with four members, the director would plan out the camerawork and final image beforehand, and the band members would be shot together until they could all perform as directed. The captured video would then be finished in editing. In other words, most of the production scheme was decided in prep work. The workflow using the Volumetric Studio is completely different. Here, we shoot each member’s performance individually and relay that data to the director, who can then freely adjust its angle, size, or position, since the unique feature of 3D video is that it has a free viewpoint. And with a 360-degree image, there is no worry of camera staff accidentally entering the shot. All of this means that multiple versions can be created from just one shoot. In a recent MV project, we shot the members individually in separate sessions, and the finished product was completed without them ever having to meet.”
Virtual Production × BOLT: Delivering Immersive Imagery That Replicates In-Person Experiences from the Studio
Next, we discussed virtual production and how it affects the potential of image expression. Virtual production is a technology for creating imagery so immersive that it feels as though the viewer is actually there in real life. This becomes possible by projecting imagery onto four high-resolution LED walls: one behind the subject, one on each side, and one above. The back LED panel is slightly curved, which Hirano-san explains: “By creating a modest curve rather than a flat surface, we can reproduce more ‘real’ angles when shooting up close.”
During our interview, we watched a shoot of Hirano-san sitting in the driver’s seat of a car, which produced a video of him seemingly driving around town. Because footage is projected on the high-resolution LED walls around him, the body of the car picks up natural reflections and the light behaves as it would in real life.
Because images can be projected exactly as envisioned, virtual production makes visual expressions of all kinds possible. One merit of using virtual production in combination with real scenery is that shooting can continue for as long as desired, without worrying about the weather or the time of day.
“An example is the brief time before sunrise or after sunset, commonly referred to as ‘magic hour.’ If the shoot is done on location, there is very little margin for redos. With virtual production, however, all you need is pre-shot footage of magic hour, and you can shoot the subject as many times as desired.”
Another example that capitalizes on the strengths of virtual production is shoots that would put the subject in dangerous situations. Scenes that are physically impossible, say the subject teleporting from Japan to somewhere overseas, can also be shot with virtual production simply by switching the background.
Then what about CG visuals? How does combining virtual production with CG differ from full CG production? On this point, and on the purpose of shooting with virtual production, Hirano-san answers:
“Combining real subjects with CG visuals requires modeling the subject with CG software, which can be quite difficult. However, with virtual production, immersive videos are possible simply by preparing a background image.”
Another unique feature is “in-camera VFX,” a technology that links the background image with the camerawork, Hirano-san explains. “In real-life shoots, when the camera moves along with the subject, the background shifts as well. With in-camera VFX, this background shift can be replicated just as it would appear on location. That means there is no need to recreate background images afterward, while the resulting imagery still looks like it was shot in the real world.” To achieve even more “real” visuals, virtual production can be combined with BOLT, the high-speed compact cinema robot offered by Mark Roberts Motion Control, a Nikon group company and British robotic solutions provider. In fact, the driving scene we saw during this interview was shot using this combination.
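To give a rough sense of the kind of calculation behind in-camera VFX, the sketch below computes an off-axis view frustum from a tracked camera position to a flat LED wall, following the widely used generalized perspective projection described by Kooima (2008). The wall dimensions, camera position, and near-plane value are illustrative assumptions; this is a generic example, not the actual configuration at Heiwajima Stage.

```python
# Minimal sketch of the frustum math used in LED-wall (in-camera VFX) rendering.
# The wall geometry and camera position are illustrative assumptions.
import numpy as np

def led_wall_frustum(pa, pb, pc, eye, near):
    """Compute off-axis frustum extents (left, right, bottom, top at the
    near plane) for a flat LED wall, given the tracked camera position.

    pa, pb, pc: lower-left, lower-right, upper-left corners of the wall (metres)
    eye:        tracked camera position (metres)
    near:       near-plane distance of the virtual camera (metres)
    """
    pa, pb, pc, eye = (np.asarray(p, dtype=float) for p in (pa, pb, pc, eye))
    vr = pb - pa; vr /= np.linalg.norm(vr)           # wall "right" axis
    vu = pc - pa; vu /= np.linalg.norm(vu)           # wall "up" axis
    vn = np.cross(vr, vu); vn /= np.linalg.norm(vn)  # wall normal, towards the camera
    va, vb, vc = pa - eye, pb - eye, pc - eye        # eye-to-corner vectors
    d = -np.dot(va, vn)                              # distance from eye to wall plane
    left   = np.dot(vr, va) * near / d
    right  = np.dot(vr, vb) * near / d
    bottom = np.dot(vu, va) * near / d
    top    = np.dot(vu, vc) * near / d
    return left, right, bottom, top

# Example: a 6 m x 3 m wall, camera about 2 m away and slightly off to one side.
print(led_wall_frustum(pa=(-3, 0, 0), pb=(3, 0, 0), pc=(-3, 3, 0),
                       eye=(0.5, 1.5, 2.0), near=0.1))
```

As the tracked camera moves, these frustum extents change, so the background rendered onto the wall shifts with the same parallax the camera would see on location, which is the effect described above.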
The main feature of BOLT is its high-speed movement, at a rate unachievable by a human operator. Another is its ability to repeat exactly the same motion over and over again, explains Hirano-san.
“BOLT can shoot from angles that are nearly impossible to hold with a handheld camera. When shooting inside a car, for instance, it can pan from the back towards the front in one smooth motion.”
By combining virtual production and BOLT, the possibilities of imagery shot in the studio expand significantly. Hirano-san adds, “To pursue lifelike visuals, we also need to design the physical side of the shoot accordingly.”
“Just now, I acted out motions like turning the steering wheel in time with the background video and making certain eye movements. Perhaps blowing wind from outside to sway my hair would have been effective as well. Another idea would be to physically shake the car to portray it rocking, or to prepare a background beforehand that makes it seem as though the car is moving. We ourselves are still searching for techniques to achieve more realistic visuals, and I believe that makes the process all the more fun and worthwhile for creators.”
Then exactly what kind of videos can be created with the technologies introduced so far, namely the Volumetric Studio, virtual production, and BOLT?
Nikon Creates has published works created by combining these video technologies; most recently, it produced an MV for Def Tech that garnered much attention.
Aiming to Become an Infrastructure Company for “Spatial Videos”
After a tour of the facilities at the imaging complex Heiwajima Stage, we were joined by Hirano-san and Takuma-san of Nikon Creates to hear more about their work.
―Please tell us about your thoughts on how to achieve new image expression.
Hirano: Even before Nikon Creates was established, we had been working on 3D development behind the scenes. Perhaps we were able to enter the 3D field smoothly because of our long involvement and background in imaging through photography. As we ventured into the spatial video area, we were fortunate to possess a substantial amount of prior knowledge, and I believe that is our biggest differentiator from other companies.
One example of that is having a clear policy on “what makes excellent imagery.” Nikon has always conducted development with a particular focus on achieving natural visuals, and I believe that stance carries over into our pursuit of new expressions in spatial videos.
Takuma: The image expression techniques we introduced today are being used in MVs and commercials. In the future, we plan for them to also be utilized in dramas and movies.
―Today, there are many new technologies that may be used to create new image expression, such as MR and the metaverse. Why is there a need for technologies like MR and the metaverse, and what do you believe is their future outlook?
Hirano: With regard to the metaverse, I believe it will largely be defined as “a virtual space that allows for communication.” Looking back at our history, modes of communication have evolved in phases, beginning with text-only transmissions such as letters, then transmissions of sound such as the telephone, and finally the transmission of images and video. Beyond this lies the transmission of “space,” and that is where I believe the metaverse and MR will become vital as part of our evolution.
However, spatial technology has not yet reached its full potential, and there are still instances of subtle lag or motion sickness. Even so, the technology keeps advancing, and these issues are likely to be solved. Most notably, the gaming industry is leading the way in adopting these technologies; it has already achieved a world where users interact with one another while playing. At Nikon Creates, we aim to contribute to spaces, which lie beyond imagery, and offer our strengths to create an infrastructure for imagery.
―What is your vision for Nikon Creates in the future of visual content? Please also share what kind of initiatives you plan to conduct moving forward.
Hirano: I believe both the Volumetric Studio and virtual production have the potential to completely renew existing video production schemes. Or rather, it could be said that we cannot reach the true form of new video production unless we change the existing scheme.
We plan to provide opportunities for young creators who aspire to work in video production to experience our facilities, so that we can see the birth of new ideas and practices unlike anything before. I hope that through this, we can encourage and support creators through any challenges they may face.
Takuma: Unfortunately, in terms of new image expression, the U.S. and South Korea are currently more advanced than Japan. This is likely due to a difference in culture, as those countries tend to invest more in new and challenging ventures. For Japan to catch up with, and even surpass, the rest of the world, it is necessary to work together with other companies and raise the level of the industry as a whole, rather than working alone.
For example, one issue we are facing is in the area of background imagery used at the Volumetric Studio. I believe it could be effective to collaborate with other companies to prepare simple content, in tandem with a certain degree of licensing.
Hirano: We hope to become a company that offers infrastructure services for spatial videos by 2030. To achieve this, we will continue initiatives to create a new market for spatial videos within the visual content industry.
*Titles and work duties are those at the time of the interview.