All the secrets of creating a real-time architectural walkthrough: an exclusive interview with CG generalist and level designer Benoît Dereau
We at 3D Architettura could not help but notice the growing interest of 3D artists in real-time technologies. Although market demand for virtual reality and walkthroughs is also growing, this field remains quite new and experimental, giving artists ample opportunity for research and for developing their own know-how. Which software should you choose? Which workflow works best? Today we discuss these questions with Benoît Dereau, author of the famous Unreal Paris.
Unreal Paris by Benoît Dereau
What is your background and education? Why did you decide to work in CG?
I was about 14 when I had my first contact with 3D development tools based on video game engines. The publisher VALVe had made them publicly available so that players could create additional content for its video games (Counter-Strike, Half-Life, etc.). Since then, I have never stopped using these tools. I taught myself the basics of level design and content creation for video games.
I quickly made the choice to study 3D computer graphics. These studies allowed me to familiarize myself with software like Autodesk 3ds Max, Cinema 4D, Mental Ray and V-Ray. Over the years, I discovered other areas of the 3D world not related to video games: animation, architectural visualization and packshots.
My first job was in the video game industry, at Arkane Studios in Lyon (Dishonored), where I worked as a level architect. The tasks of a level designer approach those of a level architect, except that the latter focuses on the visual side.
Later, I left the video game industry to work mainly with V-Ray and Corona in the architectural visualization industry.
Why did you choose Unreal Engine for making your real-time walk-through?
What are the difficulties and hidden pitfalls for a CG artist working with this software?
Keeping a close eye on the video game industry, I followed the technical development of its engines. When it was finally released, CryEngine, developed by Crytek, allowed the public to achieve relatively photorealistic rendering. However, it was the implementation of PBR (physically based rendering) in Unreal Engine, together with the experience I had gained on that engine, that motivated me to build architectural visualization scenes with it.
This allowed me to combine video games and architectural visualization!
What is your usual workflow for the real-time walk-throughs?
In a static render you can work only on what you want to show, but in real time you need to be ready to show whatever the client decides to look at. So how can one optimize the work and get the best result?
I do all the modeling in 3ds Max. My assets use the fewest polygons possible. Some objects are instanced directly in Unreal Engine 4 to reduce the resources needed to run the scene. Then I do all the UV unwrapping to make sure that my textures and lightmaps are applied as efficiently as possible.
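To give an idea of the kind of budgeting this unwrapping step involves, here is a minimal sketch (the function and numbers are illustrative assumptions, not part of Benoît's actual pipeline): given an asset's surface area and a target lightmap texel density, you can estimate a sensible power-of-two lightmap resolution.

```python
import math

def lightmap_resolution(surface_area_m2: float, texels_per_meter: float,
                        min_res: int = 32, max_res: int = 1024) -> int:
    """Estimate a power-of-two lightmap resolution for a mesh.

    surface_area_m2: total UV-mapped surface area of the asset, in square meters.
    texels_per_meter: desired lightmap texel density along one axis.
    """
    # Side length (in texels) of a square map covering the surface area.
    side = math.sqrt(surface_area_m2) * texels_per_meter
    # Round up to the next power of two, clamped to a sane range.
    res = 2 ** math.ceil(math.log2(max(side, 1)))
    return max(min_res, min(max_res, res))

# A 3 m x 4 m wall (12 m^2) at 16 texels per meter:
print(lightmap_resolution(12.0, 16.0))  # → 64
```

Denser lightmaps capture sharper shadows but increase both bake time and memory, which is why keeping the budget per asset under control matters.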
Once the scene is implemented in Unreal Engine 4, I mainly use static lighting. This gives me high accuracy on the light bounces. I have not yet mastered real-time global illumination, though I know Unreal Engine 4 is capable of it. I have a strong preference for natural lighting in my scenes; it lets me enhance the environment more easily. The light calculation can be time-consuming: each change to the environment requires you to recalculate the entire scene. That is why it becomes valuable to get an idea of the result your lighting will give before starting a bake. This phase is always a big dilemma, because each light calculation can represent a substantial loss of time.
Then I create a shader including all the variables that interest me, such as normal maps, reflections and albedo (diffuse). Unreal Engine 4 does not offer all-in-one shaders like V-Ray or Mental Ray, with simple slots to fill and numbers to change; instead, you have more control over your shaders. Procedural textures are rarely used in real time because they are very resource-intensive for the GPU (the shaders must be recalculated every frame).
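To illustrate what such a material ultimately computes per pixel, every frame, here is a minimal Lambert-style sketch combining an albedo sample with a surface normal and a light. This is a simplified stand-in for illustration only, not UE4's actual PBR shading model:

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    n = math.sqrt(dot(v, v))
    return tuple(x / n for x in v)

def shade(albedo, normal, light_dir, light_color):
    """Lambertian diffuse term: albedo * light_color * max(N . L, 0).

    albedo and light_color are RGB triples; normal and light_dir are 3D vectors.
    """
    n_dot_l = max(dot(normalize(normal), normalize(light_dir)), 0.0)
    return tuple(a * c * n_dot_l for a, c in zip(albedo, light_color))

# A surface facing straight up, lit from directly above by white light:
print(shade((0.8, 0.7, 0.6), (0, 0, 1), (0, 0, 1), (1, 1, 1)))
```

Because this function runs for every pixel of every frame, anything expensive inside it (such as evaluating procedural noise instead of sampling a pre-baked texture) multiplies the GPU cost, which is the trade-off Benoît describes.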
There is also real work on sound design for most scenes, to make them more consistent. I do not do it myself on most of my projects; it is handled by Maël Vignaux, and I advise you to check out his portfolio.
The last step is simply packaging the content for the required platform! This is done automatically by Unreal Engine 4 via the main menu.
Lucid Arch Dreams by Benoît Dereau
How do you deliver your walkthroughs to the client?
Is there a risk that the file is too large, or that the client's computer is not powerful enough to run it?
And what about mobile devices?
I provide my client with an executable file that I have previously compiled so that it runs properly on their hardware. For each hardware platform, further optimization is required to overcome the various hardware and software restrictions.
We must constantly juggle optimization and performance when implementing objects, textures, shaders and lights. The goal is to get the most interesting result possible without the scene being too demanding in resources and time.
Smartphones and tablets do not support ambient occlusion and real-time reflections. So I try to bake the ambient occlusion effect in during the lighting calculation for the scene.
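Baking occlusion like this amounts to multiplying a precomputed ambient-occlusion map into the lit texture data offline, so the mobile GPU pays nothing at runtime. A generic sketch of that idea, with made-up texel data (not Benoît's actual tooling):

```python
def bake_ao(base_texels, ao_texels):
    """Multiply a grayscale AO map (values 0..1) into an RGB texture, per texel.

    base_texels: list of (r, g, b) tuples, e.g. a baked lightmap or base color.
    ao_texels:   list of AO factors, same length; 1.0 = unoccluded, 0.0 = fully dark.
    """
    return [tuple(channel * ao for channel in rgb)
            for rgb, ao in zip(base_texels, ao_texels)]

base = [(0.8, 0.8, 0.8), (0.5, 0.4, 0.3)]  # two RGB texels
ao = [1.0, 0.5]                            # fully lit, half occluded
print(bake_ao(base, ao))  # the second texel comes out darkened by its AO value
```

The trade-off is the usual one for baking: the occlusion is frozen into the texture, so it stays correct only as long as the geometry and lighting do not change.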
With new technologies, static renders no longer produce the "wow" effect they did even 10 years ago. Do you see any innovative approaches to make static renders more appealing, both to artists and to their customers?
I really think that customers are looking for more immersion, either through interaction or through virtual reality headsets. We must find a way to implement it in traditional rendering projects without drastically increasing the working time. Letting customers change the colors of the furniture at the press of a button is an example of something they really like to have. I am not taking much risk with this answer, because I really have no convincing solution to this issue.
Do you think that real-time technology will win over pre-rendered animation?
Are we facing a comeback of machinima, as we recently experienced a comeback of stereoscopic devices?
It is clearly unlikely that pre-rendering technologies will disappear soon. If you are seeking high-quality renders, pre-rendering engines still have a clear advantage: there is no hardware problem on the customer's side, and you avoid the restrictions that currently exist in real time. The architectural visualization workflow is constantly evolving, but the pre-rendering process has not yet proved inefficient.
However, the price will always be higher for real-time content of equivalent quality. It is mainly on interactivity that real time will have the advantage.