Sunday, December 9, 2007
Ken gave us an overview of music for animation in this talk.
The first thing he mentioned is the power of a musical score. Music can create a great deal of motion on the screen; it can make your film better or worse depending on how and when you use it. Someone once said that music can save bad acting. Second, he talked about establishing location and time period. Music has three basic functions: first, play what you see; second, play the subtext (not what you see) of a character; third, play against the action of a scene.
Ken also talked about some of the worst things you can do with music. For example, music cannot be counted on to save a scene. It is never too early to bring in a composer. When talking to your composer, try to use emotional terms; don't try to speak in musical terms. What you need to do is let your composer know what you feel, or what you think the audience should feel, about the scene. That is what talking about the music is for. You might play a CD or a scene from another picture, but don't tell them you want that exact music; tell them that this is what you feel emotionally. Give your composer a chance to react and come up with his or her own ideas. Never try to explain how they should do it; let them work with sound.
A song can set up the story of your movie. A song played at the beginning can stand for the whole movie and develop into its underscore. Never discount how you set up the first scene of your movie. The same music can play as different things: as tension, as a chase, as love. There is no exception: all music is emotion.
Ken showed many movie clips to illustrate his points. He showed The Sum of All Fears to discuss the first introduction of other sounds into the score, and Mulan to talk about the experience of working on a musical. Finally, Ken gave us one tip about animating to music or lyrics: do it. Animate to a piano track, to what the music and tempo are going to be. Write ideas down when you get them so you can come back to them later. And don't use material you aren't allowed to use; that is a monster.
Friday, December 7, 2007
To Bill Whittington, I asked, “Sound is crucial for the credibility of visual effects. How are sounds made stylized, iconic, and distinct for each film?” At first I thought I was asking a specific question, but as I read Bill’s book, Sound Design and Science Fiction, I realized my question pertained to the art of sound design as a whole. As a Critical Studies professor in the USC School of Cinematic Arts, Bill Whittington writes unique and important work as an author of film criticism, with a strong understanding in the art and practice of sound design.
His book, Sound Design and Science Fiction, details the development of sound design from the experimental work of Walter Murch and Ben Burtt, where the concept of a sound designer was conceived, to the highly crafted and creative work of Dane Davis. He discusses how after 2001: A Space Odyssey, audiences grew increasingly conscious of a film’s sound track in terms of genre expectations and the meaning it conveys. He describes Walter Murch’s work on THX 1138 and Ben Burtt’s work for the Star Wars series. For example, the character of R2-D2 communicates completely through rhythmic electronic sounds which function as a subliminal language. This character is the comedic relief for the series and his charm is carried not by his limited movement and expressionless face, but through his articulate and expressive sound effects ‘language.’
Whittington goes on to describe sound design used for thematic effect in films such as Alien and Blade Runner, where sound informs the audience about the character and the environment. In Alien, the mechanical spaceship is infused with the biological sounds of rain, heartbeats, and breathing. The sound of the ship conveys a sense of a living mechanical entity. In Blade Runner, the removal of the voice-over in the Director’s Cut completely changed the meaning of the character of Deckard. This version was much more successful. It demonstrates the importance of thoughtful sound design for science fiction films and other genres.
As the art of sound design grew and more films required special sound textures, technology also progressed to allow multi-channel presentation, where “sound could be hung in a room like a production designer would hang textiles on a set.” This enabled the sound design to immerse the audience in the environment of the film and direct the eye to the right part of the screen. Whittington gives the example of Gary Rydstrom’s work on Terminator 2: Judgment Day, where he uses bone crunches and breaks oriented to different parts of the screen to make the violence in the film hyper-real.
Finally, Whittington concludes his research with a look into the future of DVDs, multi-channel home theaters, and video games. In terms of sound design and science fiction, he examines Dane Davis' work on The Matrix, showing how, by knowing the history and practices of the sound designers who came before him, Davis was able to build on and extend their craft.
In researching Bill Whittington's work, I have learned to appreciate how the science fiction genre rose from B-picture status to the artistic merit it now enjoys. The book celebrates the spirit of experimentation that pushed the medium of film sound to its present sophistication. Sound design and visual effects are the main reasons the cinema changed so drastically after 2001: A Space Odyssey and Star Wars. As audiences became conscious of sound and expected more of it, films required an artist who could specifically record, edit, and mix sounds that propel meaning into deeper realms and complete the experience of the spectacle. The sound designer brings credibility, insight, and impact to the film, and for this reason is now an integral part of the collective art of cinema.
Wednesday, December 5, 2007
Tomlinson began his lecture by talking about the coming of sound in 1928 and how it changed the history of film and the lives of many. The way films were shot changed, because the noisy cameras had to be isolated, so everything that was filmed became static and almost theatrical. With the advent of sound, something else was lost in the process.
He then illustrated how sound was perceived in its early days with a drawing of a small box. The base of the box is bounded by the frequency range, the span from bass to treble. Treble carries all the detail, and at that time it was incredibly hard to get that sound onto film and back off it again. The bandwidth, or range of sound, was very limited, so the experience of sound was barely there.
The vertical side of the box represented the amplitude range, from the loudest sounds one can hear down to the quietest. This dynamic range was also very limited. What sound engineers of the time would do is play something very loud and then something very soft, so the two contrasted with each other.
Sound then had only two dimensions; the third, the spatial aspect, was nonexistent. As history progressed, through war and other pressures, sound became more developed, and one of the people who set these advances in motion was Disney, in cooperation with the conductor Stokowski. They began experimenting with stereo sound by 1934. That initiated the need for multi-channel sound, and Fantasia (1940) was the first film to use it. There were only three channels at the time: left, right, and center. So, as it is known, it was Disney himself who originated the idea of surround sound. Building that system and deploying it at the time was very costly, limited, and sadly unsuccessful.
World War II happened, and after it the popularization of television began; as a result fewer people went to theaters and the industry crashed. The movie industry fought back, and many advances occurred, both cinematic and in audio. By the eighties, with the digital era, sound advanced as well. In 1987 a committee of the Society of Motion Picture and Television Engineers asked how many channels needed to be on a motion picture print. Tomlinson's response was 5.1: left, center, and right across the front, left surround and right surround, and a single low-frequency channel. A dedicated low-frequency channel works because our ears cannot localize low frequencies well. By the time we reach this system, the other two dimensions have been expanded: the frequency and amplitude ranges now match the limits of human hearing and cannot be extended much further.
5.1 was named by Holman in 1987, first appeared on film in 1992, and has been popular ever since. What comes after is the addition of more channels, which leads to the 10.2 system, meant to be twice as good as 5.1. How far the number of channels can go is unknown and widely speculated, but the most practical answer at the moment is 10.2.
The purpose of 10.2 is to give sound designers much greater flexibility and to create a far more immersive environment for the audience. With these channels it is possible to recreate the acoustics of nearly any location with astonishing realism. Holman found that the second most important sound wave to reach the audience, after the one coming directly from the source, is the one reflected off a point on the ceiling halfway between the source and the listener. This is because most rooms have hard, reflective ceilings, while the walls are semi-absorptive due to objects in the room, and the floor, usually covered with carpet, absorbs most of the sound that hits it. This first overhead reflection reaches the ear at a slightly later time, allowing the brain both to localize the primary sound and to compute the size of the room. By placing two speakers 45° above and to the left and right of the audience, this key sound wave can be recreated. The other speakers fill in the major reverberations from the sides and the back of the room, recreating a full acoustic signature. The strength of traditional 5.1 surround is that its left and right surround speakers are diffuse: they spread the sound around the entire area. This helps prevent the "Exit Sign Effect," where audience members look away from the screen toward the source of a localized sound, not realizing it is part of the movie. However, this diffusion costs flexibility. Therefore 10.2 augments the LS (left surround) and RS (right surround) channels with two point-surround channels that can manipulate sound more finely, allowing the mixer to shift sounds in a distinct 360° circle around the viewer.
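The timing of that overhead reflection can be estimated with basic geometry: the ceiling bounce travels a longer path than the direct sound, and the difference divided by the speed of sound gives the arrival delay the brain uses. Here is a minimal sketch of that calculation; the room dimensions and listener positions in the example are my own made-up numbers, not figures from the lecture.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, roughly, at room temperature

def overhead_reflection_delay(source_xy, listener_xy, ear_height, ceiling_height):
    """Extra time (seconds) the ceiling bounce takes versus the direct path.

    Uses the mirror-image method: reflecting the source across the
    ceiling plane turns the bounce path into a single straight line.
    Source and ears are assumed to be at the same height.
    """
    dx = listener_xy[0] - source_xy[0]
    dy = listener_xy[1] - source_xy[1]
    horizontal = math.hypot(dx, dy)
    direct = horizontal  # same-height assumption makes the direct path horizontal
    # The mirror image of the source sits above the ceiling at
    # height 2*ceiling_height - ear_height.
    image_height = 2 * ceiling_height - ear_height
    reflected = math.hypot(horizontal, image_height - ear_height)
    return (reflected - direct) / SPEED_OF_SOUND

# Example (assumed numbers): screen speaker 5 m in front of a listener,
# ears at 1.2 m, ceiling at 3 m.
delay = overhead_reflection_delay((0, 0), (5, 0), 1.2, 3.0)
```

With these assumed dimensions the bounce arrives a few milliseconds after the direct sound, the sort of slight timing difference the paragraph above describes.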
The .2 in 10.2 refers to the addition of a second subwoofer. The system is bass-managed so that all the speakers on the left side use the left sub and all the speakers on the right use the right sub; the center and back surround speakers are split between the two. The two subs also serve as two discrete LFE (Low Frequency Effects) channels. Although low frequencies are not localizable, it was found that splitting the bass between the two sides of the audience increases the sense of envelopment.
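The routing rule described here is simple enough to sketch in code: left-side speakers feed the left sub, right-side speakers the right sub, and channels on the center line split between both. The channel names below are conventional labels I have chosen for illustration, not an official 10.2 specification.

```python
LEFT_SUB = "sub_L"
RIGHT_SUB = "sub_R"

# Assumed channel naming: each channel maps to the subwoofer(s) that
# carry its bass. Channels on the center line (C, back surround) split
# between both subs, as described in the lecture.
ROUTING = {
    "L": [LEFT_SUB],   "R": [RIGHT_SUB],
    "Ls": [LEFT_SUB],  "Rs": [RIGHT_SUB],   # diffuse surrounds
    "Lps": [LEFT_SUB], "Rps": [RIGHT_SUB],  # point surrounds
    "Lh": [LEFT_SUB],  "Rh": [RIGHT_SUB],   # overhead pair
    "C": [LEFT_SUB, RIGHT_SUB],             # center: split
    "BS": [LEFT_SUB, RIGHT_SUB],            # back surround: split
    "LFE_L": [LEFT_SUB], "LFE_R": [RIGHT_SUB],  # the discrete ".2" channels
}

def route_bass(channel_levels):
    """Sum each channel's low-frequency content into its subwoofer(s).

    `channel_levels` maps channel name -> low-passed signal level; a
    channel routed to both subs contributes half to each, so the total
    bass energy is preserved.
    """
    subs = {LEFT_SUB: 0.0, RIGHT_SUB: 0.0}
    for channel, level in channel_levels.items():
        targets = ROUTING[channel]
        for sub in targets:
            subs[sub] += level / len(targets)
    return subs

# Example: bass from the front-left speaker and the center channel.
mix = route_bass({"L": 1.0, "C": 1.0})
```

In this sketch the left sub receives all of L's bass plus half of C's, which is one straightforward way to realize the left/right split the paragraph describes.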
Holman's work is all about achieving an immersive environment and enhancing the experience of watching movies. It is always amazing how easily sound can enhance, and sometimes even make, a film. His work is a great advancement in film history.