Everything you need to know about compositing and its future
How do movie stunts really work? Actors perform their acrobatics in green costumes against green screens, and on screen we see them amid cities in the sky, monsters, and spectacular fights. Compositing is one of the most crucial techniques that makes this magic happen. The best part is that it is not limited to adding special effects to movies; it is also used in game design. So, if you want to know about compositing, you have come to the right place. Keep reading to learn everything about it!
What is compositing all about?
When you merge visual elements from various sources into a single image, the process is known as compositing. The process adds elements and effects to a scene; changing the background is a classic example. In movie production, live-action footage is combined with 3D graphics (also known as CGI, computer-generated imagery), and this is done with the help of video editing and compositing software.
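The merging described above usually comes down to the classic "over" operation: each foreground pixel is blended onto the background according to its alpha (opacity). The sketch below is a minimal illustration with NumPy; the arrays and the `over` function are hypothetical, not taken from any particular compositing package.

```python
# Minimal "over" composite sketch (hypothetical arrays, straight alpha assumed).
import numpy as np

def over(fg_rgb, fg_alpha, bg_rgb):
    """Layer a foreground element over a background.

    fg_rgb, bg_rgb: float arrays in [0, 1], shape (H, W, 3).
    fg_alpha: float array in [0, 1], shape (H, W); 1 = fully opaque.
    """
    a = fg_alpha[..., None]                 # broadcast alpha across the RGB channels
    return fg_rgb * a + bg_rgb * (1.0 - a)  # the classic "over" operator

# Tiny example: a 1x2 image. The first foreground pixel is fully opaque,
# the second fully transparent, so the background shows through it.
fg = np.array([[[1.0, 0.0, 0.0], [1.0, 0.0, 0.0]]])   # red foreground
alpha = np.array([[1.0, 0.0]])
bg = np.array([[[0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]])   # blue background
out = over(fg, alpha, bg)
```

Real compositors typically work with premultiplied alpha for correctness at soft edges, but the straight-alpha form above is the easiest way to see the idea.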
Background replacement through keying is another popular compositing method. In VFX compositing software, the artist designates a specific color based on the visual components that need to be modified. The application then replaces all the pixels of that color, typically a green screen, with corresponding pixels from another digital source.
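The keying step described above can be sketched in a few lines. This is a simplified illustration with NumPy, not production chroma keying: the `chroma_key` function, the images, and the threshold are all hypothetical, and real keyers handle soft edges, spill, and color spaces far more carefully.

```python
# Simplified chroma-key sketch: replace green-dominant pixels with the background.
import numpy as np

def chroma_key(foreground, background, threshold=50):
    """Replace green-screen pixels in `foreground` with `background` pixels.

    Both images are uint8 arrays of shape (H, W, 3); `threshold` controls
    how strongly green must dominate red and blue before a pixel is keyed out.
    """
    fg = foreground.astype(np.int16)          # avoid uint8 underflow in the subtraction
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    # A pixel counts as "green screen" if green exceeds both other channels by the threshold.
    mask = (g - np.maximum(r, b)) > threshold
    result = foreground.copy()
    result[mask] = background[mask]           # pull in the corresponding background pixels
    return result

# Tiny example: a 1x2 image whose second pixel is pure green.
fg = np.array([[[200, 10, 10], [0, 255, 0]]], dtype=np.uint8)
bg = np.array([[[5, 5, 5], [7, 7, 7]]], dtype=np.uint8)
out = chroma_key(fg, bg)
# The red pixel survives; the green pixel is swapped for the background.
```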
As a viewer, you see a single picture whose parts are taken from different images or videos; think of a weather report where the presenter stands in front of temperature maps. Compositing has become a necessity for all kinds of movies, video games, and animations.
Let’s take you to the beginning
The concept of compositing has been around since the beginning of filmmaking, although the techniques used in the past were far more primitive than what experts use today. Here are three key examples that give a good picture of the early history of compositing.
The Four Heads of Méliès
Georges Méliès, a French illusionist and filmmaker, brought revolutionary changes to early cinema. In his film Un Homme de têtes (1898) he used multiple-exposure techniques and worked out the matte effect. These tricks allowed him to place three severed heads in the scene and even interact with them.
Matte paintings by Norman O. Dawn
Norman O. Dawn developed a method of blending live footage with paintings, which gave cinema the matte painting. He painted over glass placed in front of the camera, so that damaged buildings looked intact on film.
Sodium-vapor Lighting in Yellow
Before green screens, low-pressure sodium-vapor lamps were used to separate actors from their backgrounds. A specialized camera recorded two black-and-white film strips at once: one for the actors and one for the background matte. All of these processes have since been digitized.
Digital compositing has seen amazing changes over the years. Since the days of Méliès and Chomón, putting impossible realities on screen has been the challenge. While the basics have remained the same, overlapping a range of elements so that they appear as a single shot, the technology has changed drastically from analog to digital. Achieving a convincing sense of reality was tricky at times, but thanks to advances in technology, artists now have tools that can get seemingly impossible jobs done.
Grading, rotoscoping, and keying are the key tools used by every compositor, and 3D tracking has completely changed the way visual effects are done. So, what does the future hold? Are there new revolutionary tools on the way?
Where is it going?
Here, we’re going to make some predictions; some may come true, while others will remain speculation, and only time will tell. Several factors are making compositing evolve at a rapid pace: a community that freely shares the tools its members develop, the growing capabilities of render engines, real-time 3D environment integration, and more. Together, these point to some important trends for the compositing workflow of the future.
Speed and efficiency are the two main goals of research and development at the moment. Moore’s law describes how the number of transistors on a chip has increased over time, and that growth lets developers build more powerful software that exploits both CPU and GPU capabilities. More speed means more complex algorithms, so many tasks currently handled by artists will be automated, including some of the most time-consuming ones, such as tracking and rotoscoping. Looking further ahead, we can expect smarter compositing software, which experts suggest will be driven by AI or deep learning. They predict it will be a huge revolution, especially for the big studios that can afford it.
Just five years ago this was impossible for even the fastest computers, but quick renders already exist; the proof can be seen in the latest graphics achievements and video games. Plugins such as Element 3D have brought some of these advancements into compositing programs, and they have proven revolutionary for motion design artists. The technology is still very new, and after a few more years of research it is expected to bring major changes to the VFX industry.
The idea behind this feature is simple: eliminate the intermediate steps between the compositing and 3D departments. It would let shots jump back and forth whenever the director wants to make changes, all without re-rendering the complete sequence. That said, this type of GPU workflow probably won’t be adopted by large studios for at least 20 years, not until it reaches the same level of photorealism as ray tracing.
Improving what already exists
Everything you have read so far is about improving what already exists: more power, more speed, more resolution. But will there be anything genuinely new that we may not be aware of? Here are some hints.
Depth compositing and depth cameras
It is probably not the newest technology; depth cameras have been around for a few years now. But the technology has not yet been used as a compositing tool. Cameras capture images using RGB channels, and the idea here is to add a fourth channel to the image: a Z channel, where Z stands for depth. How this channel is captured depends entirely on the technology used, such as a light-field sensor or infrared. It is not yet clear whether video will be exported in a new RGBZ format or as a separate depth pass.
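Once every pixel carries a Z value, merging two elements no longer needs hand-drawn mattes: for each pixel, whichever element is closer to the camera simply wins. The sketch below illustrates that idea with NumPy; the `depth_composite` function and the RGBZ arrays are hypothetical examples, and it ignores real-world issues like transparency and noisy depth data.

```python
# Illustrative depth-compositing sketch: the closer element (smaller Z) wins per pixel.
import numpy as np

def depth_composite(rgb_a, z_a, rgb_b, z_b):
    """Merge two RGB images using their depth channels.

    rgb_* are (H, W, 3) arrays; z_* are (H, W) depth maps where smaller
    values mean closer to the camera.
    """
    closer_a = z_a <= z_b                        # True wherever A occludes B
    out_rgb = np.where(closer_a[..., None], rgb_a, rgb_b)
    out_z = np.minimum(z_a, z_b)                 # merged depth channel for further compositing
    return out_rgb, out_z

# Tiny example: two 1x2 images whose depth order flips between the pixels.
a_rgb = np.array([[[255, 0, 0], [255, 0, 0]]], dtype=np.uint8)   # red element
a_z   = np.array([[1.0, 9.0]])
b_rgb = np.array([[[0, 0, 255], [0, 0, 255]]], dtype=np.uint8)   # blue element
b_z   = np.array([[5.0, 2.0]])
rgb, z = depth_composite(a_rgb, a_z, b_rgb, b_z)
# First pixel comes from A (z = 1 beats 5); second from B (z = 2 beats 9).
```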