Graphics

Storytelling through Rendering

Pensa and the new art of explaining products

Anyone with new ideas has the same essential problem: convincing other people that they’re great ideas.

A single picture can sometimes convey the essence of a product design, but more often new ideas need multiple images to tell the whole story.  A visual product narrative traces the thought process that went into development through a series of renderings.

Like other forms of storytelling, there’s an art to good product narratives.  Some examples can be found at the website of the New York industrial design consultancy Pensa.  Rather than showcasing single shots of finished designs for household brands such as OXO, Bic, and Samsung, Pensa presents a series of slides that show why the innovations make for better products.

Here are a few tips for clear and persuasive product narratives:

1. Get real

There’s a good reason firms are getting more creative with photo-real rendering output:  Now there’s a lot more of it.

“A few years ago, the processing time used to be a limiting factor because if it took four hours per image, then forget it,” explains Pensa principal and co-founder Marco Perry.

“To show 16 to 20 concepts or multiple views would take forever.  With the new technology, anybody in the office can literally just drop the CAD in, take a snapshot, change the view, and then grab another one.  It makes rendering a non-event.  As a result, we now generate a lot more images and have higher quality presentations.”

Pensa employs an application called KeyShot by software maker Luxion, considered a pioneer in high-speed automated rendering.  Designers can generate an entire photorealistic scene in a matter of seconds.

KeyShot’s high-resolution imagery gives the viewer more detail than hand-produced sketches and generally leaves less open to interpretation in client discussions.

2. Place the object in context

Showing your product in its intended setting can give an immediate sense of what it does and how it’s used.

CGI applications now make it easy to render the product seamlessly into a photographic back plate with the same lighting effects.

An alternative method is to model just a few familiar 3D objects that subtly suggest the environment.  In Pensa’s renderings for packaging on Mr. Longarm’s stain applicators, the model of the fence provides enough context without the distraction of too much background.

3. Demonstrate user actions


Innovation in products means users will do things differently. Where appropriate, include the user interaction with the proposed product.

In Pensa’s presentations, this is achieved with just a simple outline of a hand gesture or a standing figure.

The loose representational drawings on top of renderings convey the scale of products and their ergonomics while keeping  the spotlight on the product image.

4. Use Narrative Economy


Any time you set up a rendering view or storyboard sequence, you have to consider narrative economy.  A scriptwriter’s term, it refers to the strategy of communicating the most important story details in the least time.

Consider Pensa’s proposal for the DC+/Ethernet shelf.  Other objects (a ring of keys and a couple of smartphones) communicate the functional features: it charges phones cordlessly and has hooks.  The doorknob behind it gives several contextual clues economically.  It establishes the shelf’s height without having to show the entire wall.  The viewer also gets a quick impression of what kind of wall it is, and therefore what kind of room it is.

The combination of these minimal elements makes the viewer imagine the whole scenario – the shelf is where you throw the contents of your pockets when you enter a home or office.

5. Speak to a wide audience


Consider that your audience may grow larger than the few industry representatives you are talking with now. The narrative might soon make its way to specialists in other departments, from engineering to retail sales, who’ll have their own set of questions.

Good imagery travels fast in large organizations, Perry says.

Make your narrative clear enough for even a layperson to understand, so newcomers can join the conversation.  And include enough description to anticipate the concerns of both the manufacturer and the marketer.

Changing majors

Perry says everyone on his staff can render their own work, which makes visual storytelling a group effort.

“Rendering in the past required you to have the skills of both a photographer and a computer expert,” says Perry. “Because it’s now so fast and easy to make images, it removes the bottleneck that comes from having one particular staff member who is an expert at rendering and lighting schemes.”

Automated rendering may have eliminated the need for the equivalent of a scientific degree in order to get realistic shots.  At the same time, the technology may be introducing a requirement for a more literary-minded skill set in designers, borrowed from cinematic direction, graphic design, and comic book art, in order to spin the most dramatic and impactful stories of their ideas.

See more of Pensa’s recent product narratives, or browse through some other creative renderings.

Written by Brett Duesing, Obleo Design Media.  A version of this article appeared in DEVELOP3D Winter 2011.


Photographing the Impossible

Women are from Venus -- Raygun Studio practices the art of deception using new, more intuitive 3D software. Stunningly realistic and provocative details have put the image shop at the forefront of a new wave of advertising imagery. Credit: Michael Tompert/Cade Martin

Raygun Studio unwraps the mystery of fantastic CGI effects

“If God is in the details, we all must on some deep level believe that the truth is there, too,” writes novelist Francine Prose.  Good storytellers, she says, know that it takes just one vivid detail to make the tallest tale come across as a truthful account.  Once the fisherman describes the hook caught in the bloody gills, we are somehow more apt to believe the big fish story.

The same holds true with images.   Michael Tompert, Designer-in-Chief at the Palo Alto-based Raygun Studio, has made a career out of presenting the impossible, straight-faced, as photorealistic truth.   We know that carnivorous chocolate, luminescent lather, and winged whales don’t exist, but Tompert’s convincing details require us to take a second look.

“In a way I’m like a photographer,” says Tompert, who considers himself an image artist. “I build images that look like reality.  The content might have fantastic elements, which in many cases are client requests. If the client wants a Venus flytrap made entirely of chocolate, I want the chocolate to look so real you can taste it.”

Tompert started his own company after several years retouching product images for Apple and using 3D tools in the design of many of the company’s icons, logos, and concept visualizations.  Now in his own shop for design and advertising imagery, Tompert has a wide berth to play in his signature style.

Techniques at Raygun triangulate traditional photography, digital retouching, and computer-generated imagery (CGI) from 3D models.  For many years, CGI technology seemed impractical for photographers and graphic designers.  But Tompert explains that CGI tricks are now within reach, even for Mac-adherents like himself.

3D gets un-PC

A new generation of fast microprocessors and intuitive rendering software has made this kind of fictional photography possible.  One of the leading developers, Luxion, has released its high-speed rendering application KeyShot not only for PCs but also for Macs, finally bridging the gap between CGI and the world of graphic artists.

Tompert has worked with 3D rendering since the 90s, gaining some experience in car ads and other product shots.  “Automotive styling is where 3D CGI originated,” he says.  “Photographing cars can get extremely involved.”  By taking the same CAD used to design and manufacture a vehicle, graphic artists could render it with realistic materials and composite it into any scene.  They could plop SUVs onto desert plateaus or park luxury cars on European cobblestones, without the immense effort of on-location photo shoots.

The problem in the past, Tompert says, was that the 3D CAD and best rendering technology resided almost exclusively on the PC side of the fence.  As a consequence, the software demanded a highly technical “engineer-y” mentality that one has come to associate with PCs.  You would have to mix your own material palette by tweaking dozens of optical properties. Setting up the lighting environment required the same sort of math-heavy pre-planning.

Taking his best shot -- Raygun Studio's Michael Tompert at a shoot, mounting his Nikon D300 to a Kaidan spherical pano tripod head for an HDRI. Photo by Montse Llaurado

The worst part of the older, slower rendering apps was that you couldn’t see what you were creating.  Previews often gave only a solid-color fill of surfaces.  To see the actual interplay of light, reflections, and shadows (the convincing details) you had to process a final render, which sometimes took hours.  It was a bit like planning a fireworks display when all the rocket labels were in Chinese.  You never knew what you were going to get until you fired it off.

For many intuition-based visual artists, this amount of techie tedium was more than they could bear.   Tompert, however, soldiered through the technical demands for years, collecting a wide variety of 3D Mac tools, like Strata 3D, Amapi, Poser, Modo, and Cinema 4D.  Luxion’s KeyShot, though, offered something new.  Tompert was one of the application’s earliest adopters.

“KeyShot provides an interface worthy of the name Mac OS.   I think it will open up CGI to a much larger audience in the photography and graphic art world, including people that were initially turned off and were unwilling to jump the hurdles.”

Re-touch & go

Incorporating 3D CGI details into an image works similarly to compositing one photo into another in Photoshop.  In the title example featured here, Tompert makes a paintbrush swipe in Photoshop over his background image of the on-location beach scene, clearing an empty space in the waves in roughly the position he wants the flying saucer.  He imports the preview image from the KeyShot application running in the background, just as if it were another Photoshop document.  The UFO model appears within the stroke.  He can go back and forth between the two applications to perfect the positioning and light effects.

“The real breakthrough with the Luxion technology is that it is real time,” says Tompert. “This allows me to work truly visually,  play with the scene, experiment, and find things accidentally.”  In contrast to renderers past, the KeyShot preview generates the entire scene just as it will appear in the final, including all the light sources, high-fidelity materials, and every tiny detail of reflections, refractions, and shadows.

If Tompert rotates the UFO model slightly, the KeyShot preview immediately re-processes the scene, and the new results appear in the Photoshop brushstroke.  Like a rapidly developing Polaroid, the preview starts out fuzzy at first, but within a few seconds a tinny spacecraft gains clarity.

One advantage of comping with a 3D model rather than a static image is that the objects in the KeyShot scene are movable and adjustable in real time.  Tompert can tilt, rotate, or re-size the 3D model until the prop is naturally poised in the Photoshop composite.  KeyShot automatically recalculates the lighting effects in the new preview.
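The round trip between renderer and Photoshop ultimately rests on the standard “over” compositing operator: the render’s alpha matte decides, pixel by pixel, how much of the foreground covers the back plate. Here is a minimal numpy sketch of that operator; the function name and toy images are illustrative, not taken from any Raygun or Adobe pipeline.

```python
import numpy as np

def comp_over(fg_rgb, fg_alpha, bg_rgb):
    """Standard 'over' composite: lay a rendered foreground
    (with its alpha matte) onto a photographic back plate."""
    a = fg_alpha[..., None]              # broadcast alpha across RGB channels
    return fg_rgb * a + bg_rgb * (1.0 - a)

# Tiny 2x2 example: a half-transparent white render over a black plate.
fg = np.ones((2, 2, 3))                  # foreground render, pure white
alpha = np.full((2, 2), 0.5)             # 50% coverage everywhere
bg = np.zeros((2, 2, 3))                 # black back plate
out = comp_over(fg, alpha, bg)           # every pixel becomes mid-gray
```

Because the foreground re-renders in seconds, the same composite can be recomputed after every tilt or rotation of the model, which is what makes the workflow feel photographic rather than computational.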

“Luxion KeyShot is basically a photo studio in a box. You have lights, a stage, and bring in your props.  You can save your stage and open it up a month later and the lights are still on, the scene exactly the way you left it, without any dust gathering,” explains Tompert.  “Another difference is you can change the materials of your props.”  If Tompert clicks on another material from the library palette, the preview would start again, giving him a UFO in glass instead of metal within a few seconds.

“I just love how I can be creative and spontaneous in KeyShot with 3D models.  I can dial in the lighting and materials in real time.  I can see the reflections and refractions run up and down the models in the real environment just like in a photo studio, rather than having to pre-plan or calculate the shot.”

A real gem -- To produce the luminous effect in Gillette's shower gel ad, Raygun Studio rendered a 3D model using KeyShot's diamond material. The high level of light scattering gives the image radiance as well as depth.

Materially different

The Luxion KeyShot application has taken off with the graphic art crowd in part because, besides offering a Mac version, the software comes standard with an extensive library of pre-mixed shaders that closely match real-world materials:  woods, metals, enamel finishes, even translucent substances like glass or liquids.  Artists can jump right into rendering models with a sort of paint-by-numbers simplicity, without all the technical complexities.

Tompert often likes to trade out different materials, making unconventional choices to see if he gets better light performance in a scene.  In Raygun Studio’s recent ad image for Gillette’s new shower gel, Tompert rendered a 3D decoy of the showering man in solid diamond to produce exaggerated light scattering.   “You can push beyond the limits of a real-world photo studio.  You can make glass that sparkles more than diamond, metals that reflect more than chrome.”

Setting up -- The original HDRI backplate used in the UFO crash image above.

A higher range of possibilities

“Since I work with KeyShot frequently now, I have been shooting more of my own HDRIs, especially on bigger productions.  Doing my own HDRIs not only makes for incredible realism but turns out to be a great way to connect with everybody on the creative team on the day of the shoot,” says Tompert.  “It’s a way to span the real working world of the photographer to the virtual world of the computer.”

Tompert initially relied on the library of HDR studio and on-location images that come along with Luxion’s software.  Now with equipment to create his own, he has no limits to where he can apply CGI.

An HDRI is a high dynamic range image, which picks up many more intensities in an environment than conventional photography.  Ideally, the HDRI and the background 2D photograph are taken at the same time to capture the same play of light.  But in practice, one can also improvise.  In the spacewoman image, the photographer Cade Martin created the backplate image of the fashion model in Washington.  Weeks later, Tompert took a 360-degree HDRI at a beach in California which approximated the original scene.

“HDRIs capture a sphere that includes the whole dynamic range of radiance, from the flares on the sun to the tread underneath a rear tire,” he explains.  “Shooting your own HDRIs requires a bit of specialized hardware, whether it’s a DSLR with a fisheye or a Spheron scanning camera.  Then you need quite a bit of post-processing, either in Photoshop or in specialized tools for stitching and blending.  It’s not for the faint of heart, but it allows you to add any item imaginable into a photographed scene. It’s a good idea to work with available HDR images for a while and study how they work, or tweak existing HDRIs in Photoshop to get an understanding of how they affect a scene before going out and investing in equipment and software to shoot your own.”
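The post-processing Tompert mentions begins with merging bracketed exposures into one radiance map. The idea can be sketched in a few lines: each exposure votes for a pixel’s true radiance (value divided by shutter time), with clipped highlights and crushed shadows down-weighted. This toy version assumes a linear sensor response; real HDR tools also recover the camera’s response curve first.

```python
import numpy as np

def merge_hdr(exposures, times):
    """Merge bracketed LDR exposures (values in 0..1) into one radiance
    map. Each shot estimates radiance = value / shutter_time, weighted
    by a 'hat' function that distrusts near-clipped pixels."""
    acc = np.zeros_like(exposures[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img, t in zip(exposures, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # weight peaks at mid-gray
        acc += w * (img / t)
        wsum += w
    return acc / np.maximum(wsum, 1e-8)

# Two brackets of the same scene, one stop apart (1/60 s and 1/30 s):
dark  = np.array([[0.2, 0.4]])
light = np.array([[0.4, 0.8]])
radiance = merge_hdr([dark, light], [1/60, 1/30])   # brackets agree: 12 and 24
```

Because both brackets describe the same scene, their radiance estimates agree, and the merge simply averages them with clipping-aware weights; stitching the merged frames into a full sphere is a separate step.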

Creating a splash

The next frontier for Raygun Studio’s CGI experiments is modeling realistic fluids.  Water, whether it occurs in rain-drenched streets, mountain cascades, or just in a pitcher on a table, has traditionally been a trouble spot for photo retouchers, since it is nearly impossible to remove water elements from their original context with 2D tools.  Tompert has found in ray tracing a way to sidestep this problem.   Now, with the addition of other 3D software tools, he has been pushing liquids closer to center stage.

“I am very interested in physics modeling, which is coming along lately as Macs are getting powerful, or should I say, almost powerful enough, to allow this kind of thing.”  Tompert will generate 3D splashes, spills, or pours in applications like RealFlow or MoGraph in Cinema 4D.  “It makes for some very organic and amazingly realistic models, which open up other possibilities for CGI beyond the usual product shots. Most of the things I wind up using in my final images are not really planned out; camera views just ‘spoke’ to me, and I would take a screenshot of them and just drop them into Photoshop and finish it off.”

Pacific Heights -- A collaboration with San Francisco photographer Erik Almås, who shot the girl and the skyline, while Raygun Studio contributed the rest from rendered CGI models. Local shop Jellysquare performed the post-production and compositing.

A realistic outlook for Mac users

The advertising industry has acquired a taste for CGI-composite photography, and according to Tompert, its appetite continues to grow.  At first, CGI was a way to create less expensive glamor shots for products, but now ad agencies recognize that hyper-real compositions that tell a story can reach a deeper psychology.  As Tompert’s examples show, CGI craft can convey a mood or experience that can be imagined but not, ordinarily, photographed.

“As CGI gets more accessible, the trend will accelerate.  Even more implausibly fantastic and hyper-real constructs of the imagination will be the default for imagery in advertising,” predicts Tompert.  “It will be interesting to see how far that trend will push the envelope.  And like a lot of Mac-based applications, there’s a tendency to democratize what used to be arcane.   CGI won’t be viewed as impenetrable.  The new apps will take the big fear out of 3D and bring in a lot more people who are not very technical yet very creative.  3D tools are now opening up an avenue to express their vision.”

Written by Brett Duesing, Obleo Design Media.  A version of this story appeared in BPT Magazine Issue 22.  Editor’s note:  at the time of publication, this article referenced Bunkspeed HyperShot. Bunkspeed had announced that its new product line, formerly called HyperShot, would be called SHOT.  Forthcoming releases of SHOT, however, were limited to the Windows platform.  Technology from the former HyperShot rendering application transferred to Luxion, which does offer an application for Mac users under the name KeyShot.  Michael Tompert reports that Raygun Studio is now producing effects in KeyShot for Mac.  The product name has been updated in this article to reflect these industry changes.

About Raygun Studio

Raygun Studio, a digital retouching and CGI studio in Palo Alto, California, fuses photography, retouching, and CGI for the best in photorealistic effects.  Founded in 2005 by image artist Michael Tompert, who is represented by Kate Chase of Tidepool Reps, Raygun Studio has quickly grown a client list of design firms and agencies small and large, including BBDO, Butler Shine, Chiat/Day, Crispin Porter, Tolleson Design, Ogilvy and Y&R.  For more examples from the Raygun Studio portfolio, please visit: www.raygunstudio.com.


Use Your Illusion

Luxion KeyShot’s Power of Transformation

For professional photographers, one of the most exciting software releases in recent years was the introduction of KeyShot, a new approach to 3D rendering from Luxion that made a marked departure from the engineering mentality usually associated with CAD.  Rendering used to require a large skill set, constant adjustments, and many hours to process a single 3D scene.  KeyShot, on the other hand, seemed made for artists. The debut boasted stunning real-time high-resolution previews, an extensive palette of polished industrial-design materials, and automatic self-shadows that gave an instant illusion of depth to any CAD object.

With this year’s release of KeyShot’s follow-up, we take a sneak peek into how studios have incorporated Luxion’s CGI technology into their work. All these examples feature cars, the most expensive product to shoot professionally.  In these cases, however, no photographs of the cars were ever taken.  The photographers produced the product images solely through CGI transformations of 3D models.

Basic Black: Vond Studios, London
www.vondstudios.co.uk

“The local supercar club came to us with the idea of using a rare car like the Gallardo Nera as an appealing promotional image. As soon as we got the CAD model, we were supposed to render some previews for approval,” explains Michal Baginski, a designer at Vond Studios, which builds images for automotive, product-design, and architectural clients. Vond Studios took no photos for the project.  Baginski’s team used the backgrounds, materials, HDRI files, and lighting schemes from the software package.  The flawless realism of the first render, Baginski says, is arguably better than what would come off camera rolls after a studio shoot.

“In KeyShot, we were able to do it in minutes. The idea was to have minimal in-studio setup, so we tested a few colors of KeyShot backgrounds and stayed with standard black. For the lighting, we just loaded one of the photo studio bundles purchased from the online store and adjusted it for an even reflection that would accent the geometry of the car,” he says. “After the image was done, we just applied some smoke and flares in our image processing software on top of the actual render.  It was as easy as that. We completed all the images within one day.”

On Location: David Burgess
www.david-burgess.com

Just as Luxion KeyShot’s standard backgrounds can mimic various studio settings, photographers’ own outdoor shots can create the illusion of on-location settings with the aid of High Dynamic Range Images (HDRIs).  And they can even do the rendering outdoors.

“I shot the backplate and HDRI at the same time and at the same location,” says London photographer David Burgess of his colorful ad images for the Ford Interceptor. Seconds after taking the shots, he found a piece of shade from the Nevada sun and processed the image in the Luxion software.

“I had my laptop with me so I could work with the CGI model as I shot the backgrounds to make sure I liked the way the image was looking,” he says. “It is much easier to shoot alternative backgrounds when you work this way as opposed to doing everything later in the hotel or back at your studio.

“I think this was only made possible with KeyShot, as any other software would not have allowed me this freedom to look at a full color, working CGI model with complete HDRI lighting on a laptop on location. KeyShot is the only real solution for photographers who work visually rather than technically.”

Read more about the new KeyShot from Luxion at www.keyshot.com.

[Portions of this article appear in the February 2009 issue of Professional Photographer:  www.professionalphotographer.co.uk]

 


Q&A: Pushing Visual Limits of 3D


In their spare time, a husband-and-wife visual effects team uses modern photogrammetry to create stunning, lifelike renderings of historic Seattle landmarks

In 1905, Catholic Bishop Edward J. O’Dea laid the cornerstone on what would become St. James Cathedral in Seattle, Washington. More than 100 years later, as the structure still stands its ground, the husband-wife design duo of Matt and Danika Wright have pushed the limits of modern design technology and recreated the intricate beauty of the historical landmark in stunningly lifelike 3D renderings.

The Wrights are partners in Mattika Arts, a firm offering 3D modeling, rendering, illustration and photography services. As a visual effects team, they have created high-resolution 3D environment models for more than 11 movies, including Harry Potter, Master and Commander, XMEN2, Daredevil and Day After Tomorrow, as well as for projects in the video-gaming industry.

The pair didn’t create the St. James Cathedral renderings for profit. They simply had a personal wish to challenge themselves as 3D artists. While they were at it, they also modeled Seattle’s Mariner building and the Seattle skyline in a similar manner. Technology writer Brett Duesing spoke with Matt Wright about why he and his wife undertook these projects and how they achieved the amazing results.


A lot of married couples might take up something like tennis in their free time. You and Danika chose to replicate St. James Cathedral.

Matt Wright: [laughs] Yes, all these projects we worked on in our spare time on evenings and weekends around our regular work schedules over the course of a year. The purpose was not related to any business end. Instead, it was a challenge to ourselves to push what we could do as 3D artists. We have worked in the film visual effects industry for a number of years, and also the video-games industry. We wanted to see how far we could take some current technology to produce the most accurate, lifelike work. Photogrammetry is a technique we were familiar with professionally, but we hadn’t had the opportunity to explore all of its possibilities. We wanted to see if it could be applied on a very large scale and include a very high level of detail.

figure
Photographs of Seattle’s St. James Cathedral turn into accurate 3D measurements inside PhotoModeler. The key points in the 3D structure and the position of the cameras are then exported into Autodesk Maya for rendering.
figure
The wireframe model over one of the original photographs of the cathedral. “Shots like this help us gauge the accuracy of the project, and show areas that need to be refined,” says Matt Wright.
figure
figure
Final renderings of St. James Cathedral depict lifelike detail, created by Matt and Danika Wright, using Autodesk Maya.

Which software did you use?

We used photogrammetry software called PhotoModeler [from Eos Systems] to capture both the large-scale 3D measurements of the overall structure and the very fine details of the ornamentation. Essentially, whatever is in a photograph is measurable. PhotoModeler aligned our camera positions in 3D space and also helped generate 3D reference points. This camera and point data was then taken into Maya [from Autodesk], a modeler we use a lot in the film and game industry, which is good at handling very large, complex scenes. Inside Maya, we built all of the geometry, based on the data generated from PhotoModeler.

Why photogrammetry?

Whenever you’re recreating a real setting, you have to decide on what technology to use to measure the sites in 3D. The most obvious is a tape measure; however, this is rather impractical on such a large scale. Another option might be laser scanning; however, this would have been too intrusive on the sites, especially in the case of the cathedral, because of the size and amount of equipment we would have to take to these places.

Photogrammetry seemed like the perfect solution. All you need is a camera, and you can survey the sites quickly. About 20 minutes in the cathedral yielded all the photographs required to model the entire interior. Photogrammetry, when used correctly, also yields great accuracy, especially for architecture. The cost is also a lot cheaper, since it only requires the software and a pretty good digital camera — which makes it far more practical for a couple of 3D artists like us doing a little experimental project on the side.

Was it difficult deriving the 3D geometries from photographs?

Not really. First you take your camera — we used a Canon EOS 10D, which is a regular digital SLR camera — and you run it through a few calibration processes within PhotoModeler to get accurate information about the lenses and distortion and all the camera’s interior parameters. These camera specifics are how PhotoModeler can calculate the actual distances.

After that, it’s nothing more complicated than taking a few pictures of what you want to model. You download those pictures to your computer and pop them inside PhotoModeler. Given the pictures and also the camera’s technical parameters, PhotoModeler interprets the scene in three dimensions.

You start by matching up points between images — pick the corner of a chair in one image and the same corner in another image, for instance. You add maybe 20 matching points over the images. Doing that to a minimum of three images, you’ll start seeing this 3D scene emerge in PhotoModeler. The software will also work out the camera positions (the point where you took the picture) relative to each other, which comes in handy later when you put the final model together in Maya. All of this information can then be exported to Maya, where you have the correctly aligned cameras/photos and all the reference points that you marked in 3D. From these, you can start modeling your scene using the tools inside Maya.
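The point-matching Wright describes reduces to triangulation: each marked pixel defines a ray from its camera center out into the scene, and the 3D point sits where the matched rays (nearly) intersect. Below is a minimal two-ray version in Python; PhotoModeler’s actual solver handles many cameras at once, plus lens distortion, so treat this purely as an illustration of the underlying geometry.

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Find the 3D point nearest to two camera rays, each given as a
    center c and a direction d. This is the least-squares core of
    recovering a marked point from two photographs."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    # Solve for the ray parameters t1, t2 of the mutually closest points:
    #   d1.(p1 - p2) = 0  and  d2.(p1 - p2) = 0, with p_i = c_i + t_i d_i
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(c2 - c1) @ d1, (c2 - c1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))  # midpoint of the gap

# Two cameras 2 m apart, both sighting the same feature at (0, 0, 5):
p = triangulate(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 5.0]),
                np.array([ 1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 5.0]))
```

With perfectly matched rays the result is exact; with real, noisy picks the midpoint of the closest-approach gap gives the best-fit point, which is why calibration accuracy matters so much as cameras are added.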

All these models have so much detail — millions of vertices in one scene. In reality, you said your 3D depiction of St. James Cathedral took the equivalent of about three months of solid work to create. What were some of the shortcuts you used?

Looks can be deceptive. We did reuse some of the components throughout the model, copying the pieces, like the crown molding on the columns, and repeating them around the nave. Architecture is all about repetition, so there’s a lot you can copy. Some of the arches through the cathedral are the same, just different sizes, so all that is required is some scaling of the base curves. You can copy them over and make the modifications to make them fit. And, obviously, if you have 500 chairs that are all the same, there’s no point in building each one.

One of the interesting problems we came across with all these projects: We are not dealing with brand-new construction, where everything is 90 degrees and just about vertical. This is old architecture that has settled over time. Copying details was good, but it took a lot of tricky alignment because walls aren’t vertical and not perfectly square. The Mariner building represented about two months of work, and it turned out to be a lot harder than the cathedral because the structure had some real settling problems and hardly any 90 degree angles whatsoever. That’s when it became extremely time-consuming. You weren’t working on any flat plane to which you could align a modeling grid. Otherwise it would have been much quicker.

figure
Seattle’s Mariner building, with a wireframe model superimposed over an original photograph.
A final rendering of the Mariner building in Autodesk Maya. Matt Wright comments about the photogrammetry process: “One thing that we learned very quickly was that photogrammetry is very different to regular photography, or even photography for texture/reference work. You constantly have to think about angles between camera shots, what you have in frame, making sure there is enough detail and depth in the image.”

Do you think that staying true to the actual building measurements — flaws and all — made a difference in the final rendering? Was it worth the extra work?

In my opinion, yes, absolutely. We didn’t want to make the building dead-vertical with proper 90-degree corners. That’s not what that building was about — it isn’t that way in reality. Visually, it’s perhaps a little more of a subconscious thing. In the rendering, the walls may look perfectly vertical and the corners look perfect. But if everything was truly squared up, it probably would not look quite as realistic.

How did you deal with the enormous size of the 3D models? Did that present any limitations?

That’s why we used Maya for the final modeling. Although you can model objects very well in PhotoModeler, Maya is designed for dealing with very complex scenes and huge amounts of geometry. If you work smart and keep your work organized, Maya can handle an almost unlimited amount of data. There was really no problem in that respect working on this whole thing.

One way to work smart in Maya is to choose your geometries beforehand — whether you’ll model a section with NURBS surfaces, subdivision surfaces or polygons. This helps to manage the file sizes. We tried to keep everything in its original geometric form, right up until rendering. Traditionally at render time, you would convert the NURBS to polygons, and a lot of people would convert them to polygons long before that point. We actually kept everything in its place until the end, just to try to keep file sizes down.

Wherever possible, we tried to use instancing so at any one time only one full copy of a complex piece of geometry was stored in memory. So we didn’t have the memory overhead for 200 column tops. Each column top might have 110,000 polygons. That alone, copied around the scene by itself, would be millions and millions of polygons. With instancing, we only need one version in memory.
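The memory arithmetic behind instancing can be illustrated outside Maya with a toy example: each instance stores only a reference to the shared mesh plus its own transform, so the heavy geometry exists in memory once no matter how many times it appears in the scene. The data layout here is a hypothetical stand-in, not Maya’s internal representation.

```python
import sys

# One column-top mesh: ~110,000 polygons, stored exactly once.
column_top = list(range(110_000))        # stand-in for heavy geometry

# Instancing: 200 placements share the SAME mesh object; each instance
# carries only its transform, never a copy of the polygons.
instances = [{"mesh": column_top, "position": (float(i), 0.0, 0.0)}
             for i in range(200)]

naive_cost     = 200 * sys.getsizeof(column_top)  # if each copy were duplicated
instanced_cost = sys.getsizeof(column_top)        # geometry held once

# Every instance points at the identical mesh object in memory.
all_shared = all(inst["mesh"] is column_top for inst in instances)
```

The same reasoning scales to the Wrights’ figure: 200 column tops at 110,000 polygons each would be 22 million polygons if duplicated, but only 110,000 resident polygons when instanced.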

figure
figure
The downtown Seattle skyline, captured by photogrammetry and rendered as a 3D model (top), then textured (bottom). Matt Wright says, “Modeling an entire city is a future project that we have. The downtown skyline was a bit of a test to see how far we could go. It would be interesting to see how far you can push this technology and how much time it would take to reproduce something as massive as a city.”

PhotoModeler has been used for a wide variety of applications — industrial and scientific measurement and reverse engineering — but not so prominently in the animation industry. Why did you turn to this solution?

We actually started off using another product but discovered that it didn’t take into account all principles of lens distortion and the principal point of the camera, two factors that play a big part in the accuracy of photogrammetry. If your photogrammetry solution isn’t completely accurate, by the time you’ve added 10 or 15 cameras, the end solution won’t solve, and you’ll have no idea why.

That’s when we found PhotoModeler, and right off the bat the calculations were a lot more accurate. It has amazing tools for analyzing error, and it has an incredible feature called Idealize, which corrects all camera distortion in your images and recalculates the scene directly. Maya and most other 3D software cannot deal with distortion, so you have to remove it from the images before taking them into your 3D software. PhotoModeler is one of the first tools that comes to mind whenever we have to recreate architecture now, or recreate anything.

# # # A version of this article was published in CADalyst.

About Eos Systems

Eos Systems Inc is the developer of the award-winning PhotoModeler software and is the leader in versatile close-range photogrammetry solutions. PhotoModeler provides an easy and affordable solution for measurement or reverse engineering of objects into 3D CAD through the use of photographs. The software is used by thousands of companies specializing in crime and accident reconstruction, archeology, architecture, engineering, surveying, film and video animation.  Eos Systems is headquartered in Vancouver, British Columbia. For more information about Eos Systems and PhotoModeler, please visit:  www.photomodeler.com.