Saturday 17th May

As of now, 01.10pm, I'm just about to pop out and get MAMMA a couple of pints of milk. No, not from the cows up in the fields, but from... yes you got it... a shop.
Dear oh dear...

I was just browsing, and visiting the Star Wars site.

I found this interesting article. To anyone out there following the development and use of digital cameras in films, this is really great.

I've copied the entire thing:

Defining High-Definition
May 16, 2003

Moving Forward

The news is out that Episode III will be shot with the new generation of digital high-definition (HD) camera, the HDC-F950. Stories of this type are usually filled with confounding acronyms, numbers and highly technical information. Just what does it all mean, and why is it important? ILM's HD Supervisor Fred Meyers helps clarify some of the technology.
Episode II was the first major motion picture to be shot entirely on digital cameras. Sony developed the first generation HDC-F900 camera. By not using film, the production team saved the time and money usually invested in film stock and photochemical processing, and was able to attain an image of incredible clarity already in the digital medium ready for postproduction use.
For effects-intensive movies like Star Wars, imagery shot on film would have to be scanned and converted into digital information for the artists at Industrial Light & Magic to incorporate their amazing effects. By starting in the digital medium, the use of HD cameras saved a time-consuming step, and kept the picture digital throughout the production pipeline, from the editorial department, through to effects, through to the final mastering, and -- in select theaters -- through to film-less digital projection.

Director of Photography David Tattersall (left) and High Definition Supervisor Fred Meyers discuss luminance readings during a recent camera test.

The First Generation
"Episode II used the first generation of cameras specifically tailored around a new HD format called 24-p. That was the enabling technology to produce a digital camera and a digital recording system that could be used for a motion picture project," says Meyers. Though the HDC-F900s are seen as the first step, HD cameras had existed before that for broadcast applications such as television. The 24-p, which in this case denotes a 24-progressive frame rate, was the breakthrough.
A frame rate is the number of individual still images that are played back in a second, which when viewed sequentially, produces the illusion of movement. You may have experienced this phenomenon when working with simple flipbooks. When the human eye sees similar still images in rapid succession, it combines those images into motion.
Broadcast video plays back at a different frame rate than traditional film. Traditional film projected in theaters flickers at a rate of 24 distinct still images per second. Video plays back at 30 frames per second, and it's not distinct images but often "interlaced fields," where two images are on the screen at the same time, drawn electronically in alternating lines. This disparity between the playback nature of broadcast and film has long been a hurdle when moving images back and forth between the electronic and physical worlds.
The main accomplishment with the HDC-F900s used for Episode II was that they shot at 24 progressive frames per second. The 24 frames fit in perfectly with the traditions of film projection and editing, and the progressive frames meant that each frame was a rock-solid image, and not an interlaced halfway point of merged fields.
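A minimal sketch of the distinction above (names and data are mine, purely illustrative): an interlaced frame is woven together from two fields captured at slightly different moments, while a progressive frame is one complete image.

```python
def weave(odd_field, even_field):
    """Interleave two fields (lists of scanlines) into one frame,
    alternating odd and even lines as an interlaced display would."""
    frame = []
    for odd_line, even_line in zip(odd_field, even_field):
        frame.append(odd_line)
        frame.append(even_line)
    return frame

# Each field is captured at a different instant (t=0 vs t=1),
# so the woven frame mixes two moments in time.
odd = ["line0@t=0", "line2@t=0"]
even = ["line1@t=1", "line3@t=1"]
print(weave(odd, even))
# A 24p progressive frame, by contrast, is captured whole: every
# scanline belongs to the same instant.
```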
As with all digital innovations, as soon as the first generation is produced, a second generation of improvements is waiting around the corner. The improvements focused on three related pieces of technology: the lenses (which gather the light), the camera (which turns that light into image data), and the recorder (which stores the image data). Lucasfilm provided Sony and Fujinon with detailed feedback from the trailblazing efforts of Episode II to build a better image acquisition system.
"They listened," says Meyers. "Sony and Fujinon have devoted some resources into delivering something that is going to raise the bar for digital acquisition. We were in close contact with Sony about our Episode II experiences, and our hopes to keep the momentum going to see incremental and substantial new features in the camera and the recorder. We shared what we learned, and Sony came up with a new camera format that could be based on the 900-series camera, and had much cleaner image output than the first generation."

Third Generation Lenses
Facilitating the gathering of sharper, cleaner images are the latest generation of lenses from Fujinon. "They listened to us on our experiences on Episode II, and they've made significant improvements on the quality of the lens and the usability of the lens in a motion picture style environment," says Meyers. The new E Series lenses are now considered third generation, and while better suited to cutting edge digital cameras, still retain the familiar user-interfaces that traditional cinematographers are familiar with.
Last year, Fujinon's Cine Super C series of zoom lenses were used for the majority of visual effects photography, including motion control, miniature, greenscreen and pick up shots. Now, the Cine Super E series will be used for all photography, including on-set, location, and postproduction.

Image Quality & Compression

Image Quality: What's the 4:4:4 for?
One of the improvements with the new HDC-F950 was a re-examination of the way the camera gathered image information. All cameras are built for the same basic function: the gathering of light onto an imaging surface. Whereas film cameras use lenses to funnel light onto the photochemical surface of unexposed film, digital cameras sample that light, and assign numerical data to describe its qualities.
The first HD systems were primarily developed for use in broadcast (the transmission of visual data through electronic signals) and not theatrical exhibition (the transmission of visual data through the projection of light). Broadcast applications favor different qualities of light than theatrical projection does. The first generation HD cameras followed the broadcast standard of capturing light at a YCBCR 4:2:2 output, while the new HDC-F950 provides RGB 4:4:4 output. So, what does that mean?
The letters (YCBCR, RGB) represent the type of data recorded, while the three numbers (4:2:2, 4:4:4) represent how finely that data is sampled. Whereas RGB stores red, green, and blue light data, YCBCR stores the differences between colors.
"The Y stands for the luma, or brightness data, which your eye is most sensitive to," describes Meyers. "The CB and the CR stand for differences between the luma and the color, and those differences the eye is somewhat less sensitive to." The numbers indicate that the luma is stored at twice the information complexity of the differences (4:2:2). The new format records the colors red, green and blue each at a full 4:4:4.
"The broadcast formats use a different way of storing and transferring color and luma information -- the brightness and the hues -- out of the camera that are based on sampling rates. The original first generation used a system that saved bandwidth -- it reduced the amount of data that comes off the camera to be recorded onto tape. That fit very well with the broadcast format, but was not the most direct path to go into motion picture postproduction," explains Meyers. "So Sony changed that format from YCBCR 4:2:2, which was 8-bit, to a RGB 4:4:4, which is 10-bit. That's more numbers to represent each color and each brightness value of the pixels, and a true RGB format which is what is used in feature post-production, film recording, and digital cinema."
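A back-of-the-envelope comparison of the two formats Meyers describes. This follows the standard J:a:b sampling notation and the bit depths quoted above; the figures are rough estimates of mine, not Sony specifications.

```python
def avg_bits_per_pixel(j, a, b, bit_depth):
    """Average bits per pixel for a J:a:b chroma-subsampled signal.
    The pattern covers a block J pixels wide and 2 rows tall: luma is
    sampled at every pixel, while each chroma channel (Cb, Cr) gets
    `a` samples on the first row and `b` on the second."""
    pixels = 2 * j
    luma_samples = 2 * j
    chroma_samples = 2 * (a + b)   # Cb and Cr together
    return (luma_samples + chroma_samples) * bit_depth / pixels

ycbcr_422 = avg_bits_per_pixel(4, 2, 2, bit_depth=8)   # first generation: 16.0 bpp
rgb_444 = 3 * 10                                       # three full 10-bit channels: 30 bpp

def megabits_per_second(bpp):
    """Raw data rate for a 1920x1080 stream at 24 frames per second."""
    return 1920 * 1080 * 24 * bpp / 1e6

print(megabits_per_second(ycbcr_422))   # ~796 Mbit/s
print(megabits_per_second(rgb_444))     # ~1493 Mbit/s - nearly double the data
```

So the move from 8-bit Y'CbCr 4:2:2 to 10-bit RGB 4:4:4 roughly doubles what has to come off the camera, which is why the recorder had to improve in step.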
Having a richer, purer image to start results in increased flexibility further down the production pipeline. "Not only is this good for the amount of subtle differences in brightness and color that can be output from the camera and recorded onto tape, but it also means there's more information to manipulate in computers," says Meyers. "If you're going to enlarge an image, or if you're going to take a bluescreen element, extract it and replace the blue with other elements, you have more data to process and you get improvements in the quality."
Kiss Compression Goodbye
While the use of digital cameras has eliminated the generational image degradation experienced in film production, there are still processes that lessen the quality of a digital image.
In the traditional photochemical process, the production pipeline that added visual effects to filmed imagery greatly degraded the quality of the original image. A complex device called an optical printer would combine many separate pieces of film -- the starships, the laser blasts, the space background -- into a single composite image. Each layer that was added muddied the quality of the image -- much like repeatedly photocopying a photocopy results in a degraded duplicate of the pristine original.
Digital imagery does not undergo such degradation, since the numbers that define the image remain intact throughout the pipeline. However, to better run that cumbersome image data through the pipeline, sometimes the images are compressed or subsampled early on in the process.
Digital information can be averaged in such a way to reduce size. A large area of similar color, for instance, may be averaged to a single color, thus requiring less data to describe the differences. A background that sits mostly still, or an area of image information that remains mostly the same for a stretch of time is averaged, eliminating minute differences from frame to frame. Too much compression results in artifacts -- telltale imperfections that a trained eye can spot, especially under magnification. There's always a trade-off between image quality and image size.
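The averaging idea can be sketched with a toy run-length scheme (real video codecs are far more sophisticated; this only illustrates the size-versus-fidelity trade-off):

```python
def compress_runs(samples, tolerance):
    """Collapse runs of values within `tolerance` of the run's first
    value into (averaged value, run length) pairs."""
    runs = []
    start = 0
    for i in range(1, len(samples) + 1):
        end_of_run = i == len(samples) or abs(samples[i] - samples[start]) > tolerance
        if end_of_run:
            run = samples[start:i]
            runs.append((sum(run) // len(run), len(run)))
            start = i
    return runs

# A flat patch of sky followed by a darker edge: eight samples
# collapse to two runs, but the tiny variations are gone forever.
sky = [200, 201, 200, 199, 200, 200, 120, 122]
print(compress_runs(sky, tolerance=3))   # [(200, 6), (121, 2)]
```

Crank the tolerance too high and distinct details get averaged together: the "artifacts" a trained eye spots.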
"There are many different types of HD," says Meyers. "It's almost like you're talking about an engine: there are high horsepower and low horsepower engines." This new version has considerably more "horsepower," as it can handle uncompressed data, preserving image purity.
Though Episode II was output at a resolution of 1920 pixels across and 1080 scanning lines deep, the initial image data was often less than that, and was enhanced up to that size by image processing within the camera and the recorder. "In this version of HD, the actual amount of image data that is captured with the new camera and the new recording format is now the full 1920 pixels across," says Meyers.
"The first generation of camera used certain techniques to conserve the amount of storage and reduce the amount of data that had to come off the camera and into the recorder. The technical terms for those techniques are spatial and chroma subsampling," explains Meyers. "The output from the camera was employing chroma subsampling, and the recording system was employing spatial subsampling. Those two techniques are eliminated in the new camera and the new recorder. Also, the new recorder uses substantially less image compression."

Slow Motion Instant Replay
Though Star Wars movies generally don't have slow motion shots for dramatic effect, the technique is vital for ILM miniature photography. The illusion of a lethargic pace to camera moves helps miniature environments look bigger and more realistic. Achieving slow motion in traditional film cameras is an easy, mechanical process. Achieving the same results in digital has required some clever solutions.
In traditional film, a process called overcranking creates slow motion. The term dates back to the old days of physically hand-cranking film through a camera. If film runs at a frame rate faster than 24 frames per second, more frames are used to capture an event of a specific length. When that film is played back at 24 frames per second, the event lasts longer, as it occupies more frames. So, if a camera running at 60 frames per second captures a second-long event, a 24-frames-per-second playback will stretch that event to two and a half seconds. Similarly, undercranking a camera ends up speeding up the motion, since fewer frames are committed to capturing an event.
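The arithmetic above, as a tiny helper (the function name is mine, for illustration only):

```python
def playback_seconds(event_seconds, capture_fps, playback_fps=24):
    """How long an event lasts on screen when footage shot at
    capture_fps is played back at playback_fps."""
    frames = event_seconds * capture_fps    # frames committed to the event
    return frames / playback_fps

print(playback_seconds(1, 60))    # 2.5 - the article's overcranking example
print(playback_seconds(1, 24))    # 1.0 - normal speed
print(playback_seconds(1, 12))    # 0.5 - undercranking: motion speeds up
```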
"During the postproduction process, we made some suggestions to Sony based on what we had learned on how they might implement a feature in their camera systems that would allow motion control and miniature photography to simulate overcranking and undercranking, and do that in real-time, in the camera," says Meyers.
The ILM solution during postproduction was to average the frame imagery to the desired speed. It was a software solution that was done out-of-camera, in computers once the footage was already recorded. The new HDC-F950 can now do this in camera.
"Sony was able to implement in hardware much of that software process, allowing it to be done in camera and faster," says Meyers. "That allowed us to gain more sensitivity and better quality when we use that technique in postproduction."
SRW-1 and SRW-5000 are not droids
The recorder is the next step in the chain, as the information coming from the camera needs to be stored somewhere. Advancements in one piece of technology necessitate advancements in the next, since it would do little good to have a recording system that degraded the new quality level captured by the HDC-F950.
The previous generation camera was cumbersome, since it included the recorder and the cassette inside the camera. The new camera is much smaller as it does not contain the recording unit within its frame.
"There was a kind of a one-and-a-half generation, if you will, of the F900 that we used in post-production. It had a new way to interface the camera with the recorder, which was using fiber optics, and that allowed us a little more ease and flexibility when we were doing our effects model photography." This streamlining is now standard in the F950, which directs its output either to the SRW-1 portable recorder, or the studio SRW-5000. These recorders lay down the digital information onto specially formulated BCT-SR series videocassettes.

To the Future, The Horizon
As with all things digital, innovations continue at a hurried pace. "Things seem to be moving. Every three years, we seem to see a fairly significant improvement," says Meyers. "You're going to start to see improvements come from numerous companies. Right now, it's the early adopters that are encouraging this; as it becomes more mainstream, we'll probably start to see more people involved in it, and there will be even larger improvements."
Along with digital acquisition, improvements are being made on the digital cinema projection side of the coin as well. Lucasfilm has been working with key vendors such as Texas Instruments to improve the quality of digital projectors. "We're looking at things on new digital projectors now, such as the Episode III camera tests, and we're seeing things that we've never seen before," says Meyers. "It's an amazing improvement in the quality."


Hope you found that as interesting and informative as I did.
If you didn't, well, boo hoo.


I'm off now, at 01.20pm, to pick up that milk. I'll finish this entry later.



No, not literally, but you know...

His book THE STAND was voted one of the top ten best of the last century. I own it but haven't read it yet.

Here is what he says about his work and about writing horror:
"If you ask me what I am, my first answer would be, I'm a husband; my second answer would be, I'm a father; my third answer would be, I'm a writer."

"Cancer, heart attacks, strokes, diabetes, people trapped in burning houses - and still we go on. The skin of the world is thin, that's what my books show." "There's a lot of mystery in the world, a lot of dark, shadowy corners we haven't explored yet. We shouldn't be too smug about dismissing out of hand everything we can't understand."

"I and my fellow horror writers are absorbing and diffusing all your fears and anxieties and insecurities and taking them upon ourselves. We're sitting in the darkness beyond the flickering warmth of your fire, cackling into our cauldrons and spinning out our spider webs of worlds, all the time sucking the sickness from your minds and spinning it out into the night."


I think I'll see how far I can get into THE STAND tomorrow. I know it's a monster... but I might be able to do it. I might be able to devour it like a three-tonne hamburger covered in whipped chocolate cream.
Or maybe not.


You can visit 'the big read' and see for yourself.
Click on the image below.


Right now, outside, it sounds like a thousand bombs are falling. It's really the massive amounts of fireworks that they are pumping into the air, with acrid smoke that's filling every open window in the house.
We would like to close the windows but... we'd suffocate on our own futility towards each other.

Right. Off now.

Look after each other.

No... check back... KILL EACH OTHER.
Hmmm. That feels better.