Inside Las Meninas VR

What is it?

Las Meninas is a painting made by Diego Velazquez in 1656, and it is considered by many to be one of the most important works in art history.

One of the reasons for this is the mystery of how the painting may have been made. Velazquez himself is in the painting: the leftmost figure, standing with a brush and looking out at the viewer. The prevailing idea so far has been that the painting was done from imagination.

We are recreating this painting in 3D and putting versions of the 3D model into Mozilla Hubs, which allows people to view it on desktop and in VR. We hope to try different configurations to see whether the only way this painting could have been made truly was from imagination.

Who’s involved?

This project was brought to us by Professor Karol Kawiaka. Alaina Arnold, John Bell, and Kala Goyal have been the main people from our lab working to help implement it.

Reflection

Las Meninas is a painting created by Diego Velazquez in 1656. It is painted in an interesting way: the painter appears in the middle of the scene, seemingly working on a separate canvas, one that faces away from us. Art historians have long questioned how the painting was made, and whether the mirror at the back of the room reflects the monarchs or the canvas in the middle. We created a 3D representation of the painting, taking a lot of help from Luigi Cocchiarella's "When Image Sets Reality: Perspectival Alchemy in Velazquez's Las Meninas" (http://master.grad.hr/hdgg/kog_stranica/kog19/08kog19-Cocchiarella). We tried a couple of solutions for letting people move around inside the painting in virtual reality, and eventually landed on Mozilla Hubs, as it requires no installs or downloads and can run on anything from mobile phones to computers to VR headsets of all kinds.

By allowing people into the painting, we hope to add more evidence to the idea that the mirror at the back is reflecting the canvas in the middle of the scene.

While working on this project, we learned some important lessons about doing this kind of work, especially at the present moment.

Limitations of Remote Work

Remote work is the new normal in a world impacted by COVID-19, and it certainly does not come without limitations. While the world adjusts to these new times, the socioeconomic and geographic factors that affect our access to reliable Wi-Fi and fast internet have become that much more apparent.

Upon my return to my hometown in Texas, I realized just how unstable my internet connection at home was. Downloads that would take mere seconds on Dartmouth's eduroam now took multiple excruciating minutes of waiting and staring at a screen, time I could have used more efficiently if not for the delay. Audio cutting out and extended lag have made meetings much more difficult, and there is almost always a fear that my delayed audio will unintentionally interrupt someone while they are speaking, because it plays only after the conversation has already moved on.

Eventually I called my internet service provider and discovered that everything was outdated: the plan my family was on topped out at 3 Mbps. After a technician came out to replace the outdated wiring and we upgraded the plan, I now have speeds of up to 20 Mbps, the highest available in my area. This is an example of a geographic factor limiting available internet speed. New fiber optic service is slowly but surely making its way into my hometown over the next year, but I am not sure my family would be able to afford it; the only reason we were able to upgrade in the first place is that our old plan was so outdated that, with an added discount, it ended up being cheaper to upgrade anyway.

An inability to access machines from the Studio has also affected how we work. Using a personal machine for rendering or baking can be challenging when you do not know whether it has the capacity for such tasks. Older and cheaper machines can make for slow render times, and since the laptop I started with at the beginning of quarantine soon began overheating even while doing something as simple as browsing the web, I realized early on that it would not be a good option for rendering much of anything.

Baking the lighting, however, became essential to working around Hubs' limit on the number of lights per scene. To avoid rendering on my personal machine, we planned that after I recreated the lighting for the scene in Maya and figured out the steps required to bake it, I would send the files to Kala so that they could render them on a lab computer instead. Unfortunately, the aforementioned internet speeds meant the upload to Dropbox took almost 24 hours, and I ended up rendering almost all of the textures myself before the upload had finished. Thankfully, I had recently bought a new computer that allowed for decent render times, but while I had the means to invest in a new computer, that is likely not the case for a majority of remote workers.

Because of the extremely slow upload speeds I was faced with, we also came up with a solution involving GitLab; however, that created its own set of problems that we have yet to solve.

Learning from other disciplines

When trying to upload v2 of Las Meninas (the version with baked textures and a copy of the model to simulate a mirror), Mozilla Hubs would not let us publish our scene. Even if it had, the file size was around half a gigabyte, with enough distinct textures that the scene would have been infeasible for most people to use. When we ran into this, I thought about video games and all the interesting things game developers (and other graphics developers) have historically done, and still do, to save on file size. We ended up packing textures together in a grid and then downscaling the grid, so that we could use the same texture and material for four distinct objects.
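
As a rough illustration of that packing step, here is a minimal sketch in Python with Pillow. The file names and tile size are hypothetical, and our actual pipeline used different tools, described in the Automation section below.

```python
from PIL import Image

def pack_four(texture_paths, out_path, tile_size=1024):
    """Pack four textures into one 2x2 atlas, then downscale the atlas.

    texture_paths: four image file paths, in
    (bottom-left, bottom-right, top-left, top-right) UV-quadrant order.
    """
    # White background; filling empty space with white is what accidentally
    # produced the sketchy outlines described in the Serendipity section.
    atlas = Image.new("RGB", (tile_size * 2, tile_size * 2), "white")
    # Pixel offsets for each quadrant. Image y grows downward, so the
    # bottom-left UV quadrant starts at pixel (0, tile_size).
    offsets = [(0, tile_size), (tile_size, tile_size), (0, 0), (tile_size, 0)]
    for path, offset in zip(texture_paths, offsets):
        tile = Image.open(path).convert("RGB").resize((tile_size, tile_size))
        atlas.paste(tile, offset)
    # Downscale the grid back to a single texture's size, so four objects
    # cost roughly what one did before.
    atlas.resize((tile_size, tile_size)).save(out_path)

pack_four(["floor.png", "wall.png", "ceiling.png", "door.png"], "atlas.png")
```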

Serendipity

After we applied the composite textures to the objects, we noticed soft, sketchy, white outlines on a lot of meshes. This was both confusing and concerning, but we eventually realized that we liked the way it looked: more like a piece of art than a stilted 3D reproduction. It was entirely accidental. At some point in the process of making the composite grids of images, everything that was transparent (places in the texture maps where there were no UVs) was turned white, and, because we downscaled the pictures, a few pixels of white bled onto the borders of the UV shells, giving the models a sketchy look.

Limitations of Current Technology

Reflective and refractive surfaces are, depending on your rendering engine, either very simple and realistic (ray tracing) or nearly impossible (OpenGL, or any other rasterization-based approach).

After initially setting up the lighting, we discovered that Hubs limits the number of lights per scene so that it can run on mobile and low-end PCs, recommending no more than three.

While we were able to reduce the number of lights to five and still publish the scene, we ran into the issue a second time once we realized that our workaround for the mirror meant duplicating everything, including the lights. We decided that the best solution would be to bake the lighting instead.

Despite the constraints, we found throughout the process that these restrictions actually helped us make a better final product. The initial limit on the number of lights forced us to get rid of superfluous ones and focus on only the most essential lights for replicating the scene. Thoughtful placement of fewer lights ended up being ideal, even after baking freed us from the light limit altogether. In addition, baking the lighting allowed for a higher quality, more dynamic look that more accurately matches the lighting of the painting itself.

Importance of Support for Open Standard File Types

glTF is an open standard file format created by the Khronos Group, and it is used as the default format by some open source and free software. Mozilla Hubs, for instance, only accepts models in glTF format. Blender, an open source 3D modelling program, can export to glTF and has tools to do so in a time and space efficient way. Autodesk Maya, on the other hand, has no built-in support for it. We ended up saving in Autodesk's FBX format and using a tool to convert all the models to glTF. That tool was created years ago by a Facebook team, and it was sheer luck that it still worked. Autodesk's lack of support for this open standard incentivizes either sticking with Autodesk and affiliated products for everything (which is their goal) or using only open source software for the entire pipeline (which only hurts them). We are now considering learning Blender and switching to it at some point because we want to use more open standards. Open standard file types work best when industry tools choose to support them, but unfortunately it seems that industry tools will not support them until they become too big to ignore.
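
The conversion step looked roughly like the sketch below, here wrapped in Python so it can run over a whole folder of exports. It assumes the converter is Facebook's open-source FBX2glTF; the flag names are from memory and may differ by version, and the folder paths are illustrative.

```python
import subprocess
from pathlib import Path

def convert_all(fbx_dir: str, out_dir: str) -> None:
    """Batch-convert every FBX in fbx_dir to a binary glTF (.glb) in out_dir."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    for fbx in sorted(Path(fbx_dir).glob("*.fbx")):
        out = Path(out_dir) / fbx.stem
        # FBX2glTF is Facebook's open-source converter; check its --help
        # for the exact flags available in your build.
        subprocess.run(
            ["FBX2glTF", "--binary", "--input", str(fbx), "--output", str(out)],
            check=True,
        )

convert_all("exports/fbx", "exports/glb")
```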

Automation

When faced with file sizes that were too big and more unique textures than Hubs could reasonably handle across devices, the solution we came up with involved a lot of automation. We wrote a script to automatically place each object's UVs into a quadrant so that one texture could be used for four different objects in Maya, auto-reduced file sizes and auto-converted between image types with ImageMagick, wrote a MATLAB script to auto-combine four images into one, and used an old Facebook tool to auto-convert FBX files to GLB. These methods saved time compared with moving UVs individually by hand or combining everything one by one in Photoshop.
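
The UV step worked roughly like the sketch below. This is a simplified illustration rather than our exact script: the mesh names are hypothetical, and the polyEditUV flags used for scaling and offsetting should be checked against your Maya version.

```python
import maya.cmds as cmds

# (u, v) offset that pushes each object's UVs into one quadrant of 0..1 space,
# so four objects can share a single atlas texture.
QUADRANTS = {0: (0.0, 0.0), 1: (0.5, 0.0), 2: (0.0, 0.5), 3: (0.5, 0.5)}

def pack_uvs(meshes):
    for index, mesh in enumerate(meshes[:4]):
        offset_u, offset_v = QUADRANTS[index]
        # Convert the mesh to its UV components.
        uvs = cmds.polyListComponentConversion(mesh, toUV=True)
        # Scale the UVs to half size around the origin...
        cmds.polyEditUV(*uvs, scale=True, pivotU=0.0, pivotV=0.0,
                        scaleU=0.5, scaleV=0.5)
        # ...then shift them into the chosen quadrant.
        cmds.polyEditUV(*uvs, relative=True, uValue=offset_u, vValue=offset_v)

pack_uvs(["floorMesh", "wallMesh", "ceilingMesh", "doorMesh"])
```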

While automation is extremely helpful, there comes a time when you have to ask yourself whether it will take more time to find a method of automation to use (and then get it to actually work) than it would take to just do it manually. Factors to consider in this case would be whether the automation method, be it a script you write or a tool you find, could be used again for another project later, and whether you might benefit from being able to sit back and let it run multiple times, as automation would likely be ideal in both of these cases.

While automation was the best way to combine the images, there were two instances in which we decided to modify something manually. The first was renaming some of the files, since our decision to group the images alphabetically led to issues caused by differing standards of alphabetic sorting. Because there were not many files to fix, I decided to rename them manually instead.

The second time I modified something manually was after we switched to a different version of the floor. When I reran the script with the new image, I found that the images in the other quadrants had become black-and-white, spotty versions of themselves. After trying to fix it, and realizing the new floor texture was likely different in some way that kept the combining from working, I decided to just do it manually in Photoshop, as it was only one file and it was best to move on.

I used to lean more toward the side of just manually adjusting or converting due to uncertainty about how long it could take to find a different solution, especially when I knew a method that would work for sure, even if it were tedious. After writing the script to combine the images and going through all the other automated processes, however, I will now be looking for automated solutions more often.

Fixes can be weird

When working with the Las Meninas model in Unity, we ran into an issue where the window panes would not let light through, leaving the inside of the model dark. After searching the internet for solutions and trying and failing with the ones it offered, a chance attempt at changing the window panes from static to not static ended up working and let the light come in. Kala hypothesized that although setting objects to static is generally good, since it allows baking and speeds things up, the baked (raytraced) lighting was having a difficult time passing light through a "transparent" object; raytracing at low settings often has trouble getting light through refractive materials. When the panes were not static, and were therefore rendered in real time through OpenGL rasterization instead, it worked.

Written by Alaina Arnold and Kala Goyal