Area 51 artists had already done a version of the Gemini capsule for another television project, so they brought some images to their meeting with Farino. Tim McHugh, visual-effects supervisor at Area 51, says, “He looked at the image and asked, ‘That’s a model, right?’ And I said, ‘Well, yeah, it’s a model of sorts, but it’s a CG model.’” A few days later, Farino called and asked McHugh if the studio would like to build a CG version of the Mercury capsule.
The trickiest shot Area 51 worked on, says McHugh, was of Buzz Aldrin riding on the Gemini capsule. “We built a complete astronaut suit. It was all done by hand,” he says. The team’s first thought was to digitize the physical suit, he says, so a glove was sent out for scanning as an experiment. “But the data we got back was so messy, it wasn’t worth it.”
Instead, Dave Carlson hand-modeled the suit. McHugh says they sold Hanks and the executive producer, Tony To, on the CG version of the astronaut by first completing the glove, then shooting an actor in an astronaut suit in front of a blue screen and replacing his hand with the CG version. “We sent it over in the dailies. Tom and Tony looked at the image and were wondering why they were even looking at it. When we told them the glove was CG, they backed up and looked at it again,” says McHugh. “They couldn’t believe we had pasted in this synthetic glove.”
For a shot of the Mercury flight, Area 51 created everything from the ship to the launch tower, says McHugh. But his favorite part of the sequence was the reentry, when the ship’s parachute opened and it plunged back to Earth. “They were going to take a capsule to the desert, launch it off a tower, blow the parachutes, and hope it landed OK. It was a very expensive proposition. But we did some test shots, and they decided to do it all in CG. I don’t think the shot would have been in the show if we hadn’t figured out this technique for doing it,” he says.
To create the cloth for the parachute, artists used a combination of bump maps, surface deformations, and morph targets. “To make the shot even more challenging, the parachute had to travel through a cloud layer, which was multiple planes of transparency,” notes McHugh.
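McHugh doesn’t spell out the setup, but the morph-target piece of that recipe is simple enough to sketch. Below is a minimal, illustrative example (every name and shape here is invented, not Area 51’s actual rig) of blending a base cloth mesh toward sculpted target shapes with per-frame weights:

```python
# Hypothetical sketch of morph-target (blend-shape) deformation, one of the
# techniques Area 51 combined with bump maps and surface deformations for
# the parachute cloth. All names and data here are illustrative.
import numpy as np

def blend_morph_targets(base, targets, weights):
    """Blend a base mesh toward sculpted target shapes.

    base:    (N, 3) array of rest-pose vertex positions
    targets: list of (N, 3) arrays, e.g. 'inflated', 'ripple', ...
    weights: list of floats, animated per frame
    """
    result = base.copy()
    for target, w in zip(targets, weights):
        result += w * (target - base)  # add each target's offset from rest
    return result

# Per-frame animation: ease the 'inflated' shape on as the chute opens,
# while a low-amplitude ripple shape flutters with a sine wave.
base = np.zeros((4, 3))                  # toy 4-vertex "cloth"
inflated = base + np.array([0.0, 1.0, 0.0])   # billowed upward
ripple = base + np.array([0.1, 0.0, 0.0])     # small lateral flutter
for frame in range(24):
    t = frame / 23.0
    verts = blend_morph_targets(base, [inflated, ripple],
                                [t, 0.05 * np.sin(frame * 0.8)])
    # verts would be handed to the renderer for this frame
```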
The results, however, were more than worth the effort that Area 51 artists put into the shot. “We’ve had people ask us, ‘What exactly did you guys do in those shots?’ And we say, ‘Oh, everything.’”
In the movie Armageddon, an asteroid threatens to destroy the Earth, and it’s up to actor Bruce Willis and his crew to save mankind. Over at Dream Quest Images (Simi Valley, CA), the CG artists had their own mission for the movie: create an asteroid, shuttles, and other space effects so convincing that viewers would buy into the story line.
As it turned out, the toughest part of this challenge was creating the asteroid. To start, Dream Quest made a two-foot maquette, according to Richard Hoover, visual-effects supervisor for Armageddon. The maquette was scanned and turned into a “huge” NURBS model. For further modeling and adjustments, artists used Alias|Wavefront’s PowerAnimator. But to create and render the particles and gases emanating from the asteroid, Dream Quest wrote proprietary software (Jim Callahan was the primary developer, notes Hoover). “With our software, you can fly right through the particles without them getting grainy,” he says. “It also does object recognition. Plus we put a bunch of different animation handles on the particles. We also did some spectacular things with turbulence and forces that move the gas around. It was a learning curve. Every week we were able to do more.”
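Hoover doesn’t describe the software’s internals, so the sketch below is not Dream Quest’s code; it only illustrates the general pattern his quote points at: gas particles integrated each frame under a steady force plus a swirling turbulence field, with a toy noise function standing in for real turbulence.

```python
# Illustrative particle advection, not Dream Quest's proprietary renderer.
import numpy as np

def turbulence(p, t):
    """Cheap procedural turbulence field; a stand-in for real curl noise."""
    x, y, z = p[:, 0], p[:, 1], p[:, 2]
    return np.stack([np.sin(y * 3.1 + t),
                     np.sin(z * 2.7 - t),
                     np.sin(x * 3.5 + 0.5 * t)], axis=1)

def step(positions, velocities, t, dt=1 / 24.0,
         wind=np.array([0.2, 0.0, 0.0]), strength=0.6):
    """One Euler step: a constant directional force plus swirling turbulence."""
    accel = wind + strength * turbulence(positions, t)
    velocities = velocities + accel * dt
    return positions + velocities * dt, velocities

pos = np.random.rand(10000, 3)   # a puff of gas particles off the asteroid
vel = np.zeros_like(pos)
for frame in range(48):
    pos, vel = step(pos, vel, t=frame / 24.0)
```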
Another seemingly simple but actually quite difficult effect was handling the star fields. In fact, Hoover says, Dream Quest wound up developing its own star renderer, written by Sean Jenkins. Says Hoover: “Typically, when you render stars and you have to apply motion blur, they don’t work well together. The stars get dim when you blur, and then when you stop, they get bright again. So Sean developed this program that alleviated the change in between motion and stop and made it an even exposure.”
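Jenkins’s actual math isn’t published, but Hoover’s description suggests an energy-spreading problem: a star streaked across N pixels deposits roughly 1/N of its light in each one, so it dims in motion and pops back to full brightness at rest. Here is a hedged sketch of one plausible compensation; the formula is an assumption, not the renderer’s code:

```python
# Sketch of the star motion-blur exposure problem; the compensation below is
# an assumption about the idea, not Sean Jenkins's actual renderer code.
def star_pixel_intensity(brightness, speed_px_per_frame, shutter=0.5):
    """Per-pixel level of a star streaked by camera motion.

    brightness:          the star's steady (camera-at-rest) pixel value
    speed_px_per_frame:  apparent star speed on the image plane
    shutter:             fraction of the frame the shutter is open
    """
    streak = max(1.0, speed_px_per_frame * shutter)  # pixels the light smears over
    naive = brightness / streak    # energy spread thin: star dims in motion
    boosted = brightness * streak  # pre-boost the star's total energy...
    compensated = boosted / streak  # ...so each streak pixel keeps an even exposure
    return naive, compensated
```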
All the stars are accurate, too, says Hoover. “If an astronomer looked in detail at the sky, he would see that the stars are correctly positioned.”
The shuttles in the movie are a mix of physical and CG models because it was often easier to motion-track the CG shuttle, notes Hoover. He says they took extra pains to make sure the two looked exactly the same. “We did a shot with two shuttles: one was a model, one was CG. We put them onscreen and compared and figured out what we had to do to make the CG look exactly like the physical model.”
One step was to take a different approach to texture mapping, an endeavor headed up by artist Mark Segal. Says Hoover: “Instead of photographing the model with still film and scanning in those photographs to create textures, we took the motion-picture camera and shot frames of the shuttle 360 degrees around, horizontal and vertical, and scanned them like we would a plate. Then we tiled and loaded them in.”
The results? “We showed the [CG and physical] models to people who didn’t know which was which, and no one could tell.”
Two huge comets are on a crash course with the Earth. How do the U.S. government and the population at large deal with impending doom? That’s the question posed by the movie Deep Impact. But over at visual-effects house Industrial Light & Magic (ILM; San Rafael, CA), CG artists were dealing with a different question posed by the movie’s producers: How do you make a CG comet look both real and menacing?
“We were told up front that the comet is a character,” says Bill George, co-visual-effects supervisor for the movie along with Scott Farrar. “It’s ‘the bad guy,’ so it had to be mean and nasty and frightening, yet the producers wanted it to look realistic. We did a lot of research. Basically, a comet is a big fuzz ball. So we started following down that path, but after a while, we realized it just didn’t look threatening; it looked fluffy. So we decided to make the rock more visual and stripped away some of the layers.”
While it may seem like particles would be a natural choice for a phenomenon such as a comet, ILM instead used a patch system (although some particles were used for extra detail). “We initially started using particles, but then we discovered that just a teeny adjustment changed the look of the whole thing,” says George. “We found that the patch system worked a lot faster.”
As such, the comet is actually a wireframe model layered with transparency and other maps. ILM used fractals to impart motion, explains Ben Snow, computer-graphics supervisor. “If you’ve got a texture map, you can change the way you look into that map using a fractal-change technique. If you’re indexing into a texture map, you can use a fractal to modify the point that you’re sampling in the texture. And if you’re animating with fractals, which we were, it changes even more.”
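Snow’s description maps onto a simple pattern: perturb the (u, v) lookup point with an animated fractal value before sampling the map. The sketch below illustrates that indexing trick with a toy fBm stand-in; the constants and the noise function itself are arbitrary placeholders, not ILM’s code.

```python
# Fractal-offset texture indexing, illustrative only.
import numpy as np

def fbm(u, v, t, octaves=4):
    """Toy fractional-Brownian-motion value; real code would use Perlin noise."""
    total, amp, freq = 0.0, 0.5, 1.0
    for _ in range(octaves):
        total += amp * np.sin(freq * (u * 12.9898 + v * 78.233) + t)
        amp *= 0.5
        freq *= 2.0
    return total

def sample_fractal_offset(texture, u, v, t, scale=0.05):
    """Look up the texture at a fractally displaced point; t animates the look."""
    du = scale * fbm(u, v, t)
    dv = scale * fbm(v, u, t + 17.0)   # decorrelate the two axes
    h, w = texture.shape[:2]
    x = int(((u + du) % 1.0) * (w - 1))
    y = int(((v + dv) % 1.0) * (h - 1))
    return texture[y, x]
```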
ILM did use particles extensively for one scene, notes George, when the astronauts land on the comet to try to blow it up. The surface of the comet needed some element to give it a sense of energy and atmosphere. After several trials, the producers decided to use fake snow. But as it turns out, just as stars can seem deceptively simple to produce, so can snow. “Scott and I thought, ‘Let’s shoot, and we’ll add the snow later.’ The problem is, you set all these procedural things, like speed, movement, and jitter, and then the snowflakes looked like they were alive, kind of like bees. It was really hard to make them look like they were just being randomly blown around by the wind.”
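The “bees” problem has a well-known cause: independent per-flake jitter decorrelates neighboring flakes, which reads as self-propelled motion. One common remedy, sketched below as a guess at the shape of the fix rather than ILM’s actual solution, is to drive every flake from a single smooth, shared wind field so that neighbors move together:

```python
# Hypothetical contrast between per-flake jitter ("bees") and a shared
# smooth wind field; all details here are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def jittery_step(pos, dt=1 / 24.0):
    """Naive approach: independent random kicks per flake look alive."""
    return pos + rng.normal(0.0, 0.5, pos.shape) * dt

def wind_field(p, t):
    """Smooth shared gust field: nearby flakes move together, like wind."""
    return np.stack([np.sin(p[:, 1] * 0.8 + t * 0.3),
                     -0.3 * np.ones(len(p)),           # steady fall
                     np.cos(p[:, 0] * 0.6 - t * 0.2)], axis=1)

def windy_step(pos, t, dt=1 / 24.0):
    return pos + wind_field(pos, t) * dt

flakes = rng.uniform(-5.0, 5.0, (2000, 3))
for frame in range(48):
    flakes = windy_step(flakes, t=frame / 24.0)
```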
Trial and error paid off, though, and ILM got the look it was after for both the comet and the snow. Overall, ILM artists created 129 comet shots for Deep Impact. Of the work, George notes, “All the shots were challenging.”
Several special-effects houses worked on Lost in Space, the movie version of the popular 1960s sci-fi TV series. The Magic Camera Co. (MCC; Shepperton, England) created one of the movie’s more extensive scenes: the opening 3 1/2-minute space sequence in which Major Don West (Matt LeBlanc) and crew go to battle in CG Bubble Fighters against terrorists flying CG ships called Sedition Raiders. The mission was to protect the Jump Ring, a structure under development for transporting spacecraft. That, too, was computer-generated.
MCC created all the spaceship models from scratch using conceptual artwork supplied by the Lost in Space production department. To model the Jump Ring, MCC artists used NewTek’s LightWave. The Bubble Fighters and the Sedition Raiders were modeled and animated with Kinetix’s 3D Studio Max.
Actually putting the actors in their CG fighters and matching the 3D animation and film proved to be tricky tasks, however. First the actors were filmed on a green-screen stage using a specially built remote-control gimbal rig that served as the cockpit, says Angie Wills, digital-effects line producer at MCC. MCC artists then constructed the rest of the Bubble Fighter around the cockpit using 3D Studio Max.
To match the 3D animation and film footage, MCC developed a special plug-in that lets artists take data from the motion-control rig and import it into Max, says Alan Marques, digital-effects supervisor at MCC. “The rig came with its own unique, specially written software that gives you the virtual camera position in space. Instead of building an entire replica of the rig in the computer model to try to get that data, the software can tell you where the camera is in meters from a zero point. It will give you the xyz coordinates of the camera position in space, and it also will give you the xyz for the camera’s target.”
Max supports a target-based camera but, unfortunately, cannot load raw ASCII data, which is what the rig software provided: “literally, a stream of data for the camera position for every frame,” says Marques. So to get that data into Max, MCC wrote a plug-in.
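The stream’s exact layout isn’t given, but the data-massaging half of the plug-in’s job is easy to sketch: parse per-frame camera and target coordinates out of the text and turn them into keyframes. The line format below (‘frame cx cy cz tx ty tz’) is an assumption, and a production plug-in would of course be written against the Max SDK rather than in Python:

```python
# A sketch of the kind of conversion MCC's plug-in had to do; the file
# format here is an assumption, not the rig software's documented output.
def parse_camera_stream(path):
    """Read lines like 'frame cx cy cz tx ty tz' into keyframe dicts."""
    keys = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 7:
                continue                       # skip blanks and comments
            frame = int(fields[0])
            cx, cy, cz, tx, ty, tz = map(float, fields[1:])
            keys.append({"frame": frame,
                         "camera": (cx, cy, cz),  # meters from the rig zero point
                         "target": (tx, ty, tz)})
    return keys
```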
The sequence was shot with an anamorphic lens, which squeezes the image horizontally, adds Marques. “If you load that into a CG system, it has no idea about lens curvature, and nothing will line up,” he says. “To remove that, we basically made up a black-and-white grid that we shot for every lens on the shoot. We positioned the grid a set distance from the lens so that we could see how the grid lines distorted with curvature for each shot. Then we used Elastic Reality and made a warping mesh based on the distortion of the grid. It tracked each curved line and made an appropriate line that was straight.”
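Marques’s grid method translates to a standard two-step warp: measure where each grid intersection landed in the distorted plate versus where it should sit, interpolate those offsets into a dense per-pixel displacement field, and resample the plate through it. The sketch below is illustrative only, not MCC’s Elastic Reality setup:

```python
# Illustrative grid-based undistortion; ideal_pts are where the straight
# grid lines *should* cross, shot_pts are where they landed in the plate.
import numpy as np
from scipy.interpolate import griddata

def displacement_field(ideal_pts, shot_pts, h, w):
    """Interpolate sparse (shot - ideal) offsets to one offset per pixel."""
    ideal = np.asarray(ideal_pts, dtype=float)            # (n, 2) as (x, y)
    offsets = np.asarray(shot_pts, dtype=float) - ideal
    ys, xs = np.mgrid[0:h, 0:w]
    dx = griddata(ideal, offsets[:, 0], (xs, ys), method="linear", fill_value=0.0)
    dy = griddata(ideal, offsets[:, 1], (xs, ys), method="linear", fill_value=0.0)
    return dx, dy

def unwarp(plate, dx, dy):
    """Sample the plate at each pixel's distorted location (nearest pixel)."""
    h, w = plate.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    sx = np.clip(np.rint(xs + dx).astype(int), 0, w - 1)
    sy = np.clip(np.rint(ys + dy).astype(int), 0, h - 1)
    return plate[sy, sx]
```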