Based on your guidelines and supplied materials, we create 3D models and adapt them to your needs.

Depending on the model's intended use, your preferences and your requirements, the model can be made using various techniques, including "traditional" mesh modeling or sculpting. A sculpted high-poly mesh can later be retopologized to a low-poly model, with the detail from the high-poly version baked into a normal map for that model.

The next step is to create surface materials for the prepared 3D model. These can be made as procedural shaders, without the need for UV unwrapping, or prepared as PBR textures, e.g. in Substance Painter, using UV texture mapping.

PBR materials are mainly composed of several textures, each responsible for a different property of the object's surface: surface color (albedo map), surface irregularities (bump or normal map), metalness (metalness map), surface roughness (roughness map), the way light is reflected (specular map), light emission (emission map), translucency (translucency map) and transparency (opacity map).
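To illustrate, a PBR texture set can be thought of as a simple data structure, one slot per map. This is only a sketch; the slot names below follow common conventions (e.g. the glTF metallic-roughness model) rather than any single tool:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PBRMaterial:
    """A few common PBR texture slots; each holds a file path or None."""
    albedo: Optional[str] = None      # base surface color
    normal: Optional[str] = None      # surface irregularities
    metalness: Optional[str] = None   # metallic vs. dielectric response
    roughness: Optional[str] = None   # micro-surface roughness
    emission: Optional[str] = None    # self-illumination
    opacity: Optional[str] = None     # transparency

    def assigned_maps(self):
        """Names of the texture slots that actually have a map assigned."""
        return [name for name, path in self.__dict__.items() if path is not None]

# Hypothetical file names, for illustration only.
wood = PBRMaterial(albedo="wood_albedo.png", normal="wood_normal.png",
                   roughness="wood_rough.png")
print(wood.assigned_maps())  # ['albedo', 'normal', 'roughness']
```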

After preparing the models, it's time to get on with preparing the scene. One of the most important things here is good lighting. To achieve natural lighting and photorealistic results, we use HDRI environment maps. These textures are characterized by a wide tonal range and contain information about light intensity, which gives very natural scene lighting, soft shadows and pleasing reflections on the objects.

Then, following the principles of good composition, we set up the camera and optimize the scene for rendering. After rendering, if needed, we perform color correction: exposure, contrast and saturation adjustments.
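The post-render adjustments mentioned above boil down to simple per-pixel arithmetic. A minimal NumPy sketch, not tied to any particular grading tool:

```python
import numpy as np

def color_correct(img, exposure=0.0, contrast=1.0, saturation=1.0):
    """Basic grading of a float RGB image with values in [0, 1].

    exposure   -- in stops; each stop doubles the brightness
    contrast   -- scales distance from mid-grey (0.5)
    saturation -- scales distance from per-pixel luminance
    """
    out = img * (2.0 ** exposure)                        # exposure
    out = (out - 0.5) * contrast + 0.5                   # contrast around mid-grey
    luma = out @ np.array([0.2126, 0.7152, 0.0722])      # Rec. 709 luminance
    out = luma[..., None] + (out - luma[..., None]) * saturation
    return np.clip(out, 0.0, 1.0)

# A tiny 1x2 test image: a mid-grey pixel and a saturated red pixel.
img = np.array([[[0.5, 0.5, 0.5], [1.0, 0.0, 0.0]]])
graded = color_correct(img, exposure=1.0)
print(graded[0, 0])  # mid-grey pushed one stop up -> [1. 1. 1.]
```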

We can also generate 360° spherical images, which give the impression of standing in the middle of the generated scene. Services such as YouTube and Facebook currently support this type of image: by moving your mouse or smartphone, you can look around within a still image or video clip.
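For reference, a 360° spherical image is typically stored as an equirectangular (lat-long) texture. A minimal sketch of how a 3D view direction maps to pixel coordinates, assuming a simple convention (+z forward, +x right, +y up):

```python
import math

def direction_to_equirect(x, y, z, width, height):
    """Map a unit view direction to (u, v) pixel coordinates in an
    equirectangular panorama."""
    yaw = math.atan2(x, z)                      # horizontal angle, -pi..pi
    pitch = math.asin(max(-1.0, min(1.0, y)))   # vertical angle, -pi/2..pi/2
    u = (yaw / (2 * math.pi) + 0.5) * width
    v = (0.5 - pitch / math.pi) * height
    return u, v

# Looking straight ahead (+z) lands in the center of the panorama.
print(direction_to_equirect(0, 0, 1, 4096, 2048))  # (2048.0, 1024.0)
```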

Spherical renders can also be produced as stereo images, supported by VR devices such as the Oculus Rift, Samsung Gear VR, HTC Vive, or Google Cardboard with a smartphone.



In addition to static projects, we also create animations. Depending on your needs, we can make camera fly-through animations over a static scene, animate the objects and characters that are the subject of the animation, or combine video footage with a 3D scene. We can also build interactive presentations, e.g. in Unreal Engine 4. These animations can be used for product presentations, advertising, presentations of technological processes, manuals and many other purposes. If necessary, we can bake lightmaps to optimize the animated scenes.

To set in motion a character provided to us or created by us, we build a skeleton (rig) and then animate it. Animations can be created manually using sequences of keyframes or recorded via motion capture.
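A hand-animated motion is essentially a set of keyframes that the software interpolates between. The core idea can be sketched in a few lines; real animation tools also offer Bézier and other easing curves, so this linear version is only an illustration:

```python
def sample_keyframes(keys, t):
    """Linearly interpolate a keyframed value at time t.
    keys: list of (time, value) pairs sorted by time."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            f = (t - t0) / (t1 - t0)      # fraction of the way between keys
            return v0 + (v1 - v0) * f

# A joint rotation (in degrees) keyed at frames 0, 10 and 20.
keys = [(0, 0.0), (10, 90.0), (20, 0.0)]
print(sample_keyframes(keys, 5))   # 45.0 -- halfway to the first key
```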

The virtual reality (VR) market is currently growing very dynamically as a branch of the 3D graphics industry. Using this technology, we can prepare interactive presentations dedicated to VR devices. Their uses are almost unlimited, especially for investors who would like to present their products to potential customers before the product even exists. Such interactive presentations are particularly useful for real estate developers, architects, interior designers and investors running projects with a visual impact on the landscape. With Unreal Engine 4, we can prepare an interactive presentation and deliver it to you as an application with stereoscopic output, ready to use on a VR device.



Using the "camera mapping" / "camera projection" technique, we can give depth to a flat photograph. By projecting the image from the camera's viewpoint onto a prepared 3D scene, we can make it appear three-dimensional.



It is possible to obtain 3D models from a sequence of images taken from many different angles. Based on these images, a point cloud is created, and the software then generates a textured 3D object. The resulting mesh usually needs to be retopologized afterwards.



Compositing a still image or movie clip means combining separate elements and layers of an image in post-production so that the final result gives the impression that all of these elements belong to one and the same image and scene.

To isolate characters and objects from an image, they should be filmed against a solid background, usually green or blue (a so-called "green screen" or "blue screen"). Using the "chroma keying" technique, the background can then be removed and replaced with transparency. This allows us to place the footage onto a background recorded in a different environment (e.g. prepared in 3D software). To create backgrounds for extensive scenes, artists often use the matte painting technique.
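The chroma keying step described above can be sketched as a simple per-pixel rule. Real keyers use soft thresholds, edge blending and spill suppression; this hard green-screen key is only an illustration:

```python
import numpy as np

def green_screen_key(img, threshold=0.2):
    """Return an RGBA image where strongly green pixels become transparent.
    img: float RGB array in [0, 1]. A pixel is keyed out when its green
    channel exceeds both red and blue by more than `threshold`."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    is_green = (g - np.maximum(r, b)) > threshold
    alpha = np.where(is_green, 0.0, 1.0)       # 0 = keyed out, 1 = kept
    return np.dstack([img, alpha])

# One green-screen pixel and one foreground (skin-tone) pixel.
frame = np.array([[[0.1, 0.9, 0.1], [0.8, 0.6, 0.5]]])
keyed = green_screen_key(frame)
print(keyed[0, :, 3])  # [0. 1.] -- background removed, subject kept
```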

To combine film footage and a 3D scene properly, we need to transfer the movement of the real camera to the virtual camera in the 3D software. For this purpose, we use the "camera tracking" technique: special markers are placed in the shot, and based on them the software calculates the movement of the virtual camera. "Object tracking" works on the same principle, but instead of tracking camera movement, the software tracks the movement of a 3D object.



Our offer also includes video editing, with color correction included. We can add visual effects as well.