Frequently Asked Questions

Augmented reality enriches the visible picture of the world with information drawn from its context. AR (Augmented Reality) technologies improve the user experience by lowering the cost of interaction, minimizing cognitive load, and increasing attention. Contextual computing allows us to better assess current events and sense the environment, becoming a tool for extending our senses; immersive environments created with these technologies foster a closer connection between a person and the surrounding infosphere.

Between the real world, which exists physically, and the virtual world, which is completely simulated on a computer, lies the reality-virtuality continuum: environments with varying degrees of immersiveness. Immersiveness is the property of the technological part of an environment that reflects its ability to involve the subject in the system of relations defined by the environment's content.

Augmented reality does not change how a person sees or perceives the surrounding world; it only supplements the real world with artificial elements. Beyond augmented reality lies augmented virtuality: a virtual reality in which objects from the real world are present. Augmented virtuality is the part of the reality-virtuality continuum that encompasses the various compositions of real and virtual objects. The term refers to a virtual space into which physical elements, objects, or people are integrated and can interact with the virtual world in real time.
One striking example of augmented virtuality is Holoportation, a 3D capture technology from Microsoft that creates 3D models of people, compresses them, and transmits them anywhere in the world in real time. Wearing augmented reality glasses such as HoloLens, users can interact with each other in three dimensions as if they were together in real space. Communication at a distance becomes as natural as a face-to-face conversation: augmented virtuality can bring distant participants together in one virtual space, simulating a real meeting.

Interaction with all of the above environments is impossible without special devices: headsets, glasses, monitors. Because technical intermediaries are needed for immersion, all of the above belongs to computer-mediated reality.
Augmented reality claims to be one of the key elements of the infosphere, and this determines the variety of areas where it is applied: education, medicine, tourism, advertising, entertainment, design, and construction.

Education
Augmented reality can be used in the study of any subject, be it physics or history, biology or literature. For example, DAQRI’s Anatomy 4D AR application is an interactive virtual visual aid for learning anatomy.

Construction and architecture
With the help of augmented reality, architects and designers can show customers design solutions exactly as they would appear in the intended environment. Pedestrian and transport logistics can be modeled when designing a city block. An example is the “Ancient Cities 3D” project, which lets you explore detailed 3D models of entire cities, both ancient and modern.

Medicine
Today, augmented reality is used in operations to remove cancerous tumors of the liver. This was first done in 2012 in Hamburg, Germany. Before the operation, the liver was scanned and a 3D model of its vessels and the tumor was built. During surgery, doctors used an iPad to see the tumor and all the vessels of the liver; today, special glasses are available that let surgeons see the generated 3D models directly.

Entertainment and tourism
Geoinformation augmented reality systems make it easy to find a desired object in a city, learn about it remotely, and build a route to it. On the smartphone screen, such a system can display objects within a 10 km radius of the tourist’s location, along with information about the nearest points of interest. The same kind of navigation can be created indoors.

Design, advertising, retail
Augmented reality is widely used in design and advertising; examples include Heinz’s virtual recipe book and AR Door’s virtual fitting room. Dutch artist and software developer Richard Vijgen has created an application that renders the wireless signals around us as 3D images or sounds, letting you see and hear the surrounding infosphere.

WebAR is an augmented reality experience accessed via a web browser instead of an app. It delivers this web-based AR experience using technologies such as WebGL, WebRTC, and WebXR (the successor to WebVR) and their APIs. In simple terms, you only need a phone to access it.

WebAR enables smartphone users to discover AR technology in the easiest way, via the web, without the burden of installation. It breaks down barriers by offering interactive 3D models that can be accessed through a QR code or a link. WebAR also supports image target detection.

WebAR, like AR in general, can deliver a truly impressive experience, but many aspects must be thought through during development. To be functional and user-friendly, WebAR should work in the following way.
First, the device’s place in 3D space, its position and orientation, must be determined. This step is needed to anchor the 3D image on top of the real world. This tracking is often referred to as six degrees of freedom (6DoF): the ability to track three axes of position and three axes of orientation. Second, the camera stream must be exposed along with the camera’s field of view and perspective; this, too, is needed to synchronize the virtual and real worlds.
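The pose tracking described above can be sketched numerically. The following is an illustrative example, not code from any specific AR SDK: a 6DoF pose is a position (three axes) plus an orientation (three axes, stored here as a unit quaternion), and anchoring a virtual point means transforming it by that pose.

```python
# Illustrative 6DoF sketch: apply a device pose (position + orientation
# quaternion) to a point defined in the device's local space.
import math

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    vx, vy, vz = v
    # t = 2 * cross(q.xyz, v)
    tx = 2 * (y * vz - z * vy)
    ty = 2 * (z * vx - x * vz)
    tz = 2 * (x * vy - y * vx)
    # v' = v + w*t + cross(q.xyz, t)
    return (
        vx + w * tx + (y * tz - z * ty),
        vy + w * ty + (z * tx - x * tz),
        vz + w * tz + (x * ty - y * tx),
    )

def apply_pose(position, orientation, point):
    """Transform a local point into world space: rotate, then translate."""
    rx, ry, rz = quat_rotate(orientation, point)
    px, py, pz = position
    return (rx + px, ry + py, rz + pz)

# Example: device one metre up, rotated 90 degrees about the vertical (y) axis.
half = math.radians(90) / 2
pose_pos = (0.0, 1.0, 0.0)
pose_rot = (math.cos(half), 0.0, math.sin(half), 0.0)  # (w, x, y, z)

print(apply_pose(pose_pos, pose_rot, (1.0, 0.0, 0.0)))  # roughly (0, 1, -1)
```

Real AR tracking estimates this pose continuously from the camera and motion sensors; the math for placing content on top of the camera image is the same.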

Third, for WebAR to run without flaws, scene understanding should be included: the ability of the device to find a surface to place the 3D object on and to estimate the lighting in the environment.
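A minimal sketch of the geometry behind surface finding: once the system has detected a few points on a surface, the supporting plane can be described by a normal vector and an offset. Real scene understanding (in ARCore or ARKit) fits planes to many feature points; this only shows the underlying computation for three points.

```python
# Derive the plane through three detected surface points: n . x = d.
def plane_from_points(p1, p2, p3):
    """Return (normal, d) for the plane through three points."""
    u = tuple(b - a for a, b in zip(p1, p2))
    v = tuple(c - a for a, c in zip(p1, p3))
    # The normal is the cross product of two edge vectors.
    n = (
        u[1] * v[2] - u[2] * v[1],
        u[2] * v[0] - u[0] * v[2],
        u[0] * v[1] - u[1] * v[0],
    )
    d = sum(ni * pi for ni, pi in zip(n, p1))
    return n, d

# Three points on a horizontal floor at height y = 0.8 m.
normal, d = plane_from_points((0, 0.8, 0), (1, 0.8, 0), (0, 0.8, 1))
print(normal, d)  # the normal is vertical, so the plane is horizontal
```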

WebAR is part of the immersive web and, although it doesn’t need an app, it still has some technical requirements. First, to run WebAR pages, your smartphone must have sensors such as a gyroscope, an accelerometer, and an RGB camera (things most modern smartphones are equipped with). Moreover, your browser should support WebXR, an API that lets users view AR/VR content without installing extra plugins or software, and on Android devices ARCore must be installed.

For iOS users, Apple developed AR Quick Look, a feature that brings ARKit to the web. It grants quick and easy access to AR via the web using models in the USDZ format: once an AR preview image is displayed on the screen, the AR experience is just one tap away. It works in the Safari browser and in built-in applications like Mail, Notes, and Messages, and it displays high-quality 3D objects.
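On a web page, the AR Quick Look entry point is an anchor with `rel="ar"` whose first child is a preview image; tapping the preview opens the USDZ model in AR. The file paths below are placeholders.

```html
<!-- AR Quick Look trigger in Safari on iOS.
     Placeholder file names; serve the .usdz with the model/vnd.usdz+zip MIME type. -->
<a rel="ar" href="/models/chair.usdz">
  <img src="/images/chair-preview.jpg" alt="Chair 3D preview">
</a>
```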
The USDZ format was created by Apple together with Pixar Animation Studios and lets developers package 3D models for AR. It is an extension of the USD format that combines several assets, such as images and text, and renders them as one. A USDZ file itself is an unencrypted zip archive and can be created with Apple’s Python-based tools, which also include a converter for transforming other file formats, such as .fbx, .abc, .gltf, and .obj, into USDZ.
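Because a USDZ package is an uncompressed, unencrypted zip archive, Python’s standard `zipfile` module can build or inspect one. This is only a structural sketch with placeholder bytes: a valid USDZ additionally requires 64-byte alignment of each file’s data, which Apple’s and Pixar’s tools handle.

```python
# Build a USDZ-shaped archive (uncompressed zip) and inspect its entries.
import io
import zipfile

buf = io.BytesIO()
# ZIP_STORED = no compression, as the USDZ packaging rules require.
with zipfile.ZipFile(buf, "w", compression=zipfile.ZIP_STORED) as z:
    z.writestr("model.usdc", b"placeholder scene data")
    z.writestr("textures/albedo.png", b"placeholder texture data")

with zipfile.ZipFile(buf) as z:
    for info in z.infolist():
        # Every entry must be stored, not deflated.
        assert info.compress_type == zipfile.ZIP_STORED
        print(info.filename, info.file_size)
```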

Viewing 3D models in AR on mobile devices requires relatively recent hardware and software.

iOS: iPhone 7 and newer or iPad 5 and newer, running iOS 12+
Android: Devices with ARCore 1.9 support on Android 8+

The answer: glTF for Android and USDZ for iOS devices.

glTF is a 3D file format maintained by the Khronos Group. It is an all-purpose transmission format, but it has been adopted by Google as the format of choice for Augmented Reality (AR) on Android’s Scene Viewer.

USDZ is a 3D file format created by Pixar. It has been adopted by Apple as the format for AR applications on iOS, via AR Quick Look.

 

How does it work?

The original 3D data (meshes, objects, animations, etc.) is combined with all the material information in 3D Settings and converted to the glTF format using PBR materials and textures. That glTF file is then converted into a USDZ file, preserving as much of the same data as possible.
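Since glTF stores its scene description as JSON, the structure that a glTF-to-USDZ conversion must carry over (meshes and PBR materials) is easy to inspect. The asset below is hand-written for illustration, not the output of a real exporter.

```python
# A minimal glTF 2.0 document: one mesh referencing one PBR material.
import json

gltf = {
    "asset": {"version": "2.0"},
    "materials": [{
        "name": "painted_metal",
        "pbrMetallicRoughness": {
            "baseColorFactor": [0.8, 0.1, 0.1, 1.0],
            "metallicFactor": 0.9,
            "roughnessFactor": 0.3,
        },
    }],
    "meshes": [{
        "name": "body",
        "primitives": [{"attributes": {"POSITION": 0}, "material": 0}],
    }],
}

# Round-trip through JSON, as any converter would parse it.
doc = json.loads(json.dumps(gltf))
print(len(doc["meshes"]), "mesh(es),", len(doc["materials"]), "material(s)")
```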

glTF Limitations

Some shaders and features are not supported by the glTF specification, so they will not be exported.

Refraction, Dithered, and Additive transparency (will be converted to Blending)
Cavity
Bump map
Clear Coat
Anisotropy
Subsurface Scattering / Translucency
Displacement
Lights
Post-processing Filters
Annotations
Sound
Some features are supported by the glTF format itself, but not by certain engines like Unity or Three.js (e.g. https://gltf-viewer.donmccurdy.com/).

Multiple UVs – some engines may force every channel to use the same Texture Coordinates
The maximum number of vertex attributes per mesh is often set to 16, which can be especially problematic for morph target animations

USDZ Limitations

Since the USDZ file is generated based on the glTF file, anything unsupported by glTF will not be supported by USDZ. The USDZ format and Apple’s AR Quick Look have additional limitations.

Here are some key features that are not supported by USDZ / AR Quick Look. For a more exhaustive list, please visit Google’s USD from glTF compatibility documentation.

Point Clouds are not available in USDZ.
Quick Look does not have a Shadeless mode, so 3D scans and other models set to Shadeless may look darker than expected. A workaround could be to duplicate the base color texture in the emission channel.
Vertex Colors are not supported, so models that depend on Vertex Colors will not look correct.
Morph animations are not supported.
Only one animation track is supported.
Animation loop modes are not supported. Animations will loop endlessly in Quick Look.
Multiple UV channels are not supported. All textures will be mapped using the first UV channel.
Double sided rendering is not supported, so all models are rendered single sided. This is not an issue for models that are closed volumes, but it will not work well when models have planes or faces that can be visible from both sides.
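Several of the limitations above can be pre-checked by scanning a parsed glTF document for the offending features: vertex colors (`COLOR_0`), extra UV sets (`TEXCOORD_1` and up), morph targets, and multiple animation tracks. The attribute names follow the glTF 2.0 specification; the checker itself is only a sketch.

```python
# Pre-flight check: flag glTF features that USDZ / AR Quick Look will drop.
def usdz_warnings(gltf):
    warnings = []
    for mesh in gltf.get("meshes", []):
        for prim in mesh.get("primitives", []):
            attrs = prim.get("attributes", {})
            if "COLOR_0" in attrs:
                warnings.append("vertex colors are not supported")
            if "TEXCOORD_1" in attrs:
                warnings.append("only the first UV channel is used")
            if prim.get("targets"):  # morph targets
                warnings.append("morph animations are not supported")
    if len(gltf.get("animations", [])) > 1:
        warnings.append("only one animation track is supported")
    return warnings

# A document that uses vertex colors and two animation tracks.
doc = {
    "meshes": [{"primitives": [{
        "attributes": {"POSITION": 0, "COLOR_0": 1, "TEXCOORD_0": 2},
    }]}],
    "animations": [{}, {}],
}
for w in usdz_warnings(doc):
    print("warning:", w)
```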

 

Performance:

Here are some recommendations for model performance:

As few materials as possible
50 geometries/meshes or less
10 textures or less, especially when they are 4k+
500,000 polygons or less
Fewer than 35 bones per geometry
Low scene complexity
Avoid expensive transparency methods
Avoid lights casting shadows when attached to the camera
Consider using Shadeless mode when appropriate
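The numeric recommendations in the list above can be expressed as a simple budget check over a model’s statistics. The statistics dictionary and its keys are illustrative, not part of any tool’s API.

```python
# Performance budgets from the recommendations above.
BUDGETS = {
    "meshes": 50,
    "textures": 10,
    "polygons": 500_000,
    "bones_per_geometry": 35,
}

def over_budget(stats):
    """Return the names of budgets that stats exceeds."""
    return [k for k, limit in BUDGETS.items() if stats.get(k, 0) > limit]

model = {"meshes": 48, "textures": 12, "polygons": 320_000,
         "bones_per_geometry": 20}
print(over_budget(model))  # -> ['textures']
```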

Apple also offers the following USDZ-specific performance considerations:

Textures: models with large or numerous textures will not load. Downsizing textures may make them work, but the exact limit is not documented. Apple recommends textures no bigger than 2048×2048 pixels for best performance.
Vertex/polygon count: some models cannot be loaded because they have too many vertices or faces. The limit is not clearly defined, but 500k polygons seems to be a good rule of thumb; Apple recommends no more than 100k for the best performance.
Animation: Apple recommends animations no longer than 10 seconds.
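Apple’s 2048×2048 texture guidance above can be applied with a small helper that halves a texture’s dimensions until both fit the limit, which preserves the aspect ratio and keeps power-of-two sizes. The helper is a sketch, not part of Apple’s tooling.

```python
# Halve texture dimensions until they fit Apple's recommended 2048px limit.
LIMIT = 2048

def downsize(width, height, limit=LIMIT):
    while width > limit or height > limit:
        width //= 2
        height //= 2
    return width, height

print(downsize(8192, 4096))  # -> (2048, 1024)
```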

3D models are the foundation for any AR/VR experience. There are three leading processes for developing them: photogrammetry, 3D scanning, and 3D modeling; all have pros and cons based on product attributes, retailer needs, and budgets.

Photogrammetry combines many overlapping pictures (10–250+) of the product, taken from every angle, and digitally reconstructs them into one model. This works well for most products but currently struggles with metal or glass objects that reflect or absorb light. The process can be cost-effective depending on size and complexity (for instance, professional scans are available for under US$400 for a footwear product representation), but the resulting model is less flexible for reuse than one made by an artist.

3D scanning captures an object’s shape and measurements with millions of data points, producing a dense point cloud. Geometry and shape can be determined accurately, but textures, colors, and lighting may not be represented well in a photo-realistic environment. Future advances in scanning technology will combine multiple scans (light, laser, and infrared point clouds) with photogrammetry textures for more accurate color representations of products.

3D modeling refers to the manual creation of a model by a 3D artist. This is usually the highest quality, but it can be time-consuming and expensive (for instance, costs frequently exceed US$750 for a footwear product representation).
