In this article, I would like to offer a number of useful, easy-to-apply tips to help you significantly speed up rendering while avoiding certain common pitfalls. Though the analysis builds on processes as they occur in V-Ray, some points are common to all rendering programs and are therefore generally applicable.
One mistake I frequently encounter is that draft render quality is not set with the specific job in mind – e.g. it is left at a habitual level – and multiple sequential test renders are conducted at a higher quality than necessary.
Before rendering, think through what information you hope to extract from the test and adjust the sampling level accordingly. If, for instance, you wish to examine the effects of natural lighting, then a high-noise image will do just fine. If, on the other hand, it is the grain of the woodwork that interests you, then you will obviously be forced to enhance the quality level.
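The reason a low-quality draft is so much cheaper comes from basic Monte Carlo behaviour: noise falls roughly with the square root of the sample count, so halving the noise costs about four times the samples. The following sketch illustrates that arithmetic; the function name and the noise figures are purely illustrative, not V-Ray settings.

```python
import math

def samples_for_noise(base_samples: int, base_noise: float,
                      target_noise: float) -> int:
    """Monte Carlo error falls as 1/sqrt(N), so cutting noise by a
    factor k costs roughly k**2 times as many samples."""
    k = base_noise / target_noise
    return math.ceil(base_samples * k ** 2)

# Halving the noise of a 100-sample draft costs ~4x the samples:
print(samples_for_noise(100, 0.10, 0.05))  # -> 400
```

This is why a noisy draft that answers your actual question (e.g. "is the sun angle right?") is a far better use of time than a habitual medium-quality setting.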
Glass both refracts and reflects light. Though transparent, it does not, in reality, transmit 100% of the light that hits it. On a rendered image, however, and particularly where the glass in question is a simple pane, little of this effect will be visible. I recommend excluding all glass panes both from global illumination (in V-Ray, this option is found in the parameters menu) and from shadow-casting.
This can even be done with minor props found in the background of an image. If, for example, one has a restaurant with some glasses lying around on tables, it is highly unlikely that the results of this simplification will be noticeable, but it will prevent the buckets in question from getting bogged down in those particular computations.
This method is particularly useful in improving render times when both refractive GI caustics and the glass material's Affect shadows option are switched on.
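To see how little light a plain pane actually reflects – and hence how little is lost by the simplification above – the standard Fresnel formula for normal incidence gives about 4% per surface for ordinary glass (n ≈ 1.5). This is a textbook optics calculation, not a V-Ray API call:

```python
def fresnel_normal_incidence(n1: float, n2: float) -> float:
    """Reflectance at normal incidence for a dielectric interface:
    R = ((n1 - n2) / (n1 + n2)) ** 2."""
    return ((n1 - n2) / (n1 + n2)) ** 2

r = fresnel_normal_incidence(1.0, 1.5)  # air -> glass
print(round(r, 3))  # -> 0.04
```

With two surfaces per pane, roughly 8% of the light is reflected in total – a contribution that is rarely visible on background props, which is why excluding them is usually safe.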
It is usually better to switch bitmap filtering off. That way, V-Ray, rather than 3ds Max, performs the filtering, resulting in a sharper, more detailed image and, in most cases, shorter texture calculations and therefore shorter render times. The difference is particularly conspicuous with opacity maps.
(Sometimes, turning off filtering will produce strange effects, especially where bump and displacement mapping are used. Where this is the case, it is a good idea to switch it back on and increase sharpness by selecting lower blur values, instead.)
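For scenes with many bitmaps, this can be scripted rather than clicked through. The sketch below runs only inside 3ds Max (pymxs is not available elsewhere), and the property names are assumptions based on the MaxScript BitmapTexture interface – `filtering` (0 = Pyramidal, 1 = Summed Area, 2 = None) and `coords.blur` – so verify them against your Max version before relying on this.

```python
# Assumption: runs inside 3ds Max, where pymxs is available.
from pymxs import runtime as rt

def disable_bitmap_filtering(fallback_blur: float = 0.3):
    """Set every BitmapTexture in the scene to unfiltered."""
    for tex in rt.getClassInstances(rt.BitmapTexture):
        tex.filtering = 2  # "None": let V-Ray do the filtering
        # If unfiltered maps cause artifacts (bump/displacement),
        # re-enable filtering and sharpen via a lower blur instead:
        # tex.filtering = 0
        # tex.coords.blur = fallback_blur
```

Treat this as a starting point for your own pipeline script rather than a drop-in tool.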
It sometimes happens, especially with high quality models imported from purchased sets, that an accessory incorporates materials that are radically overcomplicated for an object of that size and significance. This is not necessarily a problem, but if you notice your buckets slowing or stalling at these points, it is expedient to review the models in question and, where possible, simplify.
It may be unnecessary, for example, to apply the SSS effect to a tiny houseplant. You may also wish to set reflections to specularity only. Such measures produce visible changes only very infrequently, but may have a considerable effect on render times.
In addition, you might review your own materials for unnecessarily complex solutions that are good candidates for simplification. Masking a large number of materials over one another with V-Ray Blend, for example, can sometimes slow rendering spectacularly.
It may seem self-evident, but in the heat of the job, this point is one that is easy to forget. Before initiating rendering, it is an excellent idea to hide as many things not needed by a given camera as possible. I recommend configuring layers separately, indicating which cameras do not require them on each, and hiding all that are unnecessary. A classic example would be an interior with several rooms, where each room is rendered individually.
If a given light source – such as a strip light – is small by nature, then achieving the desired intensity will require increasing the multiplier. In this situation, it could easily happen that the scene will contain a large number of tiny, high-intensity light sources, which can significantly slow computation.
In such a case, it is a good idea to contemplate increasing the size of the source slightly, while simultaneously reducing intensity, in order to optimise computing time.
If, for example, you use a disc light as a spot with a high directionality value, it matters whether the diameter is 2 centimetres or 5. Where there is no perceptible difference, a larger light at a lower intensity is the better choice. (Lights that are too large can also cause problems, but here I wanted only to point out the expedience of avoiding lights that are too small and too intense.)
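Enlarging a light without changing the scene's brightness is simple arithmetic when the light's intensity is expressed per unit area (as with V-Ray's W/m² style units – an assumption here): keeping total emitted power constant means scaling intensity by the inverse ratio of areas.

```python
def rescaled_intensity(old_radius: float, new_radius: float,
                       old_intensity: float) -> float:
    """For a disc light with per-area intensity, keeping total power
    constant while changing the radius scales intensity by
    (old_radius / new_radius) ** 2."""
    return old_intensity * (old_radius / new_radius) ** 2

# Growing a 1 cm disc light to 2.5 cm cuts the required intensity ~6x:
print(round(rescaled_intensity(1.0, 2.5, 500.0), 1))  # -> 80.0
```

The function and values are illustrative; in practice you would eyeball the result and fine-tune, but the square law tells you roughly where to start.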
Beyond lights of exaggerated intensity, your scenes may also suffer if the total number of light sources is too high. Though the situation has improved greatly since the advent of probabilistic lighting, to avoid a hangup in computations in older versions, it is worth limiting the number of lights to as few as are absolutely necessary. Of course, as the number of sources to be included in a particular scene is more or less fixed, there are no miracles to be worked here; at the same time, a few basic simplifications are usually possible.
For a series of individual fluorescent tubes, for instance, it matters little from a visual standpoint if the solution is replaced with a single V-Ray light. Or where a lighting fixture consisting of a multitude of tiny bulbs is used, you might consider having an equivalently scaled V-Ray sphere fulfil the same purpose.
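The substitution above is again just conservation of total power: the single replacement light should emit what the individual sources emitted combined. A hypothetical helper (names and figures are mine, not V-Ray's):

```python
def replacement_plane_intensity(tube_power: float, n_tubes: int,
                                plane_area_m2: float) -> float:
    """Per-area intensity (W/m^2) of a single plane light replacing
    n identical tubes, keeping total emitted power the same."""
    return tube_power * n_tubes / plane_area_m2

# Six 40 W tubes replaced by one 2 m x 0.6 m plane light:
print(round(replacement_plane_intensity(40.0, 6, 2.0 * 0.6), 1))  # -> 200.0
```

One light computing 240 W is far cheaper for the sampler than six lights computing 40 W each, and from a normal camera distance the difference is rarely visible.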
Though there is no doubt that Distributed Rendering speeds up the rendering process, prior to creating a draft, you should think through carefully whether it is really worth using. If the aim is to render the entire image, and the rendering can begin promptly on all nodes, then, naturally, DR will get the job done quicker.
If, however, you are testing just one small region (even one that requires nearly all the resources of the given work station), then not only is the use of DR unnecessary, but given the time it takes to save and load the scene, it will also slow the test. In such cases, do not forget to turn it off.
If a scene is to be tested using more than one camera, and everything in the scene is already set up for each view, then it is a good idea to send out each camera view as a batch render – using Backburner (or any render job manager) – so that machines will compute them individually.
Though this will not optimise the render itself, it will reduce time spent in front of the computer by eliminating the need to sit at a work station waiting for each frame to complete.
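One common way to script such per-camera submissions is via 3ds Max's command-line renderer. The sketch below is a loose illustration only: the scene name, camera names, and manager hostname are placeholders, and the flag spellings (`-camera:`, `-jobName:`, `-submit:`) are assumptions to verify against `3dsmaxcmd -?` on your installation (and it is written as a bash loop, though 3dsmaxcmd itself runs on Windows).

```shell
# Submit one Backburner job per camera so the farm renders them
# in parallel, one job each, with no one babysitting the queue.
for CAM in Cam_Lobby Cam_Bar Cam_Terrace; do
    3dsmaxcmd "restaurant.max" \
        -camera:"$CAM" \
        -jobName:"restaurant_$CAM" \
        -submit:manager-hostname \
        -o:"renders/${CAM}_.png"
done
```

Each job then lands in the manager's queue and the nodes pick them up independently.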
Because the rendering process images what we model, it is important that we think through precisely where and to what level of detail the things we work on will appear. Models of things that are small or that appear in the background should not be overly detailed, not only because of the time spent working on them, but also because excessive detail will encumber the rendering process.
This is particularly true if you use TurboSmooth where it might otherwise be avoided (e.g. by using autosmooth), or where you wish to apply displacement to a feature that would, on the whole, take less work to model directly (keeping in mind that the opposite can also be the case).
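The cost of careless subdivision is easy to underestimate because it compounds: each TurboSmooth iteration roughly quadruples the face count (standard quad-subdivision behaviour). The figures below are illustrative:

```python
def turbosmooth_faces(base_faces: int, iterations: int) -> int:
    """Quad subdivision (as in TurboSmooth) roughly quadruples the
    face count with every iteration: faces * 4 ** iterations."""
    return base_faces * 4 ** iterations

# A 2,000-face background prop at 3 iterations already carries
# 128,000 faces -- before instancing multiplies it further:
print(turbosmooth_faces(2000, 3))  # -> 128000
```

Two iterations on a hero object may be justified; three on a background prop almost never is.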