The other NASA lens – the Angenieux f/0.95

Before the Zeiss f/0.7 there were other lenses used in the space race. The Ranger program was a series of unmanned NASA missions launched in the early 1960s, primarily to obtain the first close-up images of the surface of the moon. Ranger 1, launched in August 1961, failed to reach its intended orbit. It was not until Ranger 7, launched in July 1964, that the first high-resolution images of the lunar surface were obtained.

The mission carried six cameras – two wide-angle and four narrow-angle – that transmitted on two channels. The F (for full) channel had one wide-angle and one narrow-angle camera. The P (for partial) channel had four cameras: two wide-angle and two narrow-angle. The images provided resolution a factor of 1000 better than was available from Earth-based views. All six cameras were RCA Vidicon slow-scan TV cameras using C-mount optics.

Three of the cameras (A, P3, P4) had a 25mm f/1 lens and three had a 76mm f/2 lens [1]. The wide-angle lenses were made by the French optical company Angenieux, and were 25mm M1 lenses with an adapter attached to mount them to the Vidicon cameras. Strangely enough, the NASA documentation [1] specifies these lenses with f/1.0 apertures, but they appear to actually be f/0.95.

The P. Angenieux Paris 25mm f/0.95 Type M1 was developed in 1953; the patent for the lens was issued in 1955 [2]. It is an 8-element lens in 6 groups, derived from the Gauss type, from which it differs in that both the front and the rear elements are each split into two lenses. This allows for the increase in relative aperture while retaining good correction for spherical aberration.

You can still pick up one of these lenses today for circa US$500.

Further reading

  1. Ranger VII Photographs of the Moon Part I: Camera “A” Series, Jet Propulsion Laboratory, California Institute of Technology (August 27, 1964)
  2. Pierre Angenieux, “Large Aperture Six Component Optical Objective”, US Patent #2,701,982 (Feb.15, 1955)
  3. Adorable 25s – 25mm F0.95 Speed Lens Comparison on Lumix GH3, 3D-KRAFT! (Feb. 2013)

Ultrafast lenses – the Zeiss Planar 50mm f/0.7

The quintessential vintage ultra-fast camera lens is the Zeiss Planar 50mm f/0.7. It was developed in 1961 for a specific purpose, namely to photograph the dark side of the moon during the NASA Apollo lunar missions. Only ten lenses were built: one was kept by Zeiss, six went to NASA, and three were sold to director Stanley Kubrick. Kubrick used the lenses to film scenes lit only by candlelight in the movie “Barry Lyndon” (1975).

There is a similarity, at least in the double-Gauss optical design – it is essentially a Gauss front with two doublets glued together, plus a rear group which functions as a condenser. The 50mm f/0.7 Planar was designed by Dr. Erhard Glatzel (1925-2002) and Hans Sauer. It is supposedly based on an f/0.8 lens designed by Maximilian Herzberger (1900-1982) for Kodak in 1937, and the two schematics do look quite similar. The idea is to take the 70mm f/1 and, by adding a condenser, brute-force it into a 50mm f/0.7: the condenser shortens the focal length and concentrates the light, in effect acting as a ×0.7 focal reducer that gains one f-stop.
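The arithmetic behind that one-stop claim works out: a condenser that scales the focal length by 50/70 ≈ 0.71 scales the f-number by the same factor (the entrance-pupil diameter is unchanged), and the light gain is the squared ratio of the f-numbers:

```latex
N' = \frac{50}{70} \times 1.0 \approx 0.7, \qquad
\text{gain} = \left(\frac{N}{N'}\right)^2 = \left(\frac{70}{50}\right)^2 \approx 1.96 \approx 1\ \text{stop}
```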

But this lens has an interesting backstory. According to Marco Cavina, who has done a lot of research into the origin of this lens (and others), the design of this lens was derived at least in part from lenses designed for the German war effort. During WW2, Zeiss Jena designed a series of lenses for infrared devices to be used for night vision in various weapons systems. One such lens was the Zeiss UR-Objektiv 70mm f/1.0. The design documents were apparently recovered during Operation Paperclip from the Zeiss Jena factory before the factory was occupied by the Soviets and then provided to the new Zeiss Oberkochen.

The design went through four prototypes before achieving the final configuration [1]. The final scheme was optimized on an IBM 7090, which had been in operation since the late 1950s. The lenses were used on a modified Hasselblad camera.

  1. Glatzel, E., “New developments in the field of photographic objectives”, British Journal of Photography, 117, pp.426-443 (1970)


The early days of image processing: To Mars and beyond

After Ranger 7, NASA moved on to Mars, launching Mariner 4 in November 1964. It was the first probe to send signals back to Earth in digital form, necessitated by the fact that the signals had to travel 216 million km. The receiver on board could send and receive data via the low- and high-gain antennas at 8⅓ or 33⅓ bits per second – so at the low rate, roughly one 8-bit pixel per second. All images were transmitted twice to ensure no data were missing or corrupt. In 1965, JPL established the Image Processing Laboratory (IPL).
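As a back-of-the-envelope check of those downlink figures (the rates are from the text above; the function name is just for illustration):

```python
# At 8 1/3 bit/s, an 8-bit pixel takes just under a second to transmit.

def pixels_per_second(bit_rate, bits_per_pixel):
    """How many pixels per second a given downlink rate can carry."""
    return bit_rate / bits_per_pixel

low_rate = 25 / 3    # 8 1/3 bit/s (low-gain antenna)
high_rate = 100 / 3  # 33 1/3 bit/s (high-gain antenna)

print(pixels_per_second(low_rate, 8))   # ~1.04 pixels/s at the low rate
print(pixels_per_second(high_rate, 8))  # ~4.17 pixels/s at the high rate
```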

The next series of lunar probes, Surveyor, was also analog (its construction was too far advanced to make changes), providing some 87,000 images for processing by IPL. The Mariner images also contained noise artifacts that made them look as if they were printed on “herringbone tweed”. It was Thomas Rindfleisch of IPL who applied nonlinear algebra to the problem, creating a program called Despike: it performed a 2D Fourier transform to produce a frequency spectrum in which the noise elements appeared as spikes, which could then be isolated and removed, and the data transformed back into an image.

Below is an example of this process applied to an image from Mariner 9 taken in 1971 (PIA02999), containing a herringbone-type artifact (Figure 1). The image is processed using a Fast Fourier Transform (FFT) in ImageJ.

Fig.1: Image before (left) and after (right) FFT processing

Applying an FFT to the original image, we obtain a power spectrum (PS), which shows the frequency components of the image. By enhancing the power spectrum (Figure 2) we are able to look for peaks pertaining to the feature of interest; in this case the vertical herringbone artifacts appear as peaks along the horizontal dimension of the PS. In ImageJ these peaks can be removed from the power spectrum (by setting them to black), effectively filtering out those frequencies (Figure 3). Applying the inverse FFT to the modified power spectrum, we obtain an image with the herringbone artifacts removed (Figure 1, right).

Fig.2: Power spectrum (enhanced to show peaks)
Fig.3: Power spectrum with the frequencies to be filtered out marked in black
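The same notch-filtering idea can be sketched outside ImageJ. Below is a minimal NumPy version, applied to a synthetic striped image rather than the Mariner frame – the stripe frequency and peak coordinates are illustrative assumptions, not values from the actual data:

```python
import numpy as np

def notch_filter(image, peaks, radius=2):
    """Zero small neighbourhoods around the given (row, col) peaks in the
    centred 2D FFT of `image`, then transform back to the spatial domain."""
    F = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = np.indices(F.shape)
    for r, c in peaks:
        F[(rows - r) ** 2 + (cols - c) ** 2 <= radius ** 2] = 0
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))

# Synthetic demo: a smooth horizontal gradient plus vertical stripes
# (periodic noise, 8 cycles across the frame).
h, w = 64, 64
clean = np.tile(np.linspace(0.0, 1.0, w), (h, 1))
noisy = clean + 0.5 * np.sin(2 * np.pi * 8 * np.arange(w) / w)
# Vertical stripes show up as a conjugate pair of peaks on the horizontal
# axis of the shifted spectrum, at (h//2, w//2 - 8) and (h//2, w//2 + 8).
restored = notch_filter(noisy, [(h // 2, w // 2 - 8), (h // 2, w // 2 + 8)])
```

In ImageJ the equivalent operation is painting those peaks black on the FFT window and then running the inverse FFT.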

Research then moved to applying the image enhancement techniques developed at IPL to biomedical problems. Robert Selzer processed chest and skull x-rays, resulting in improved visibility of blood vessels. It was the National Institutes of Health (NIH) that ended up funding ongoing work in biomedical image processing. Many fields were still not using image processing because of the vast amounts of data involved – the limitations were posed not by the algorithms, but by hardware bottlenecks.

The early days of image processing: the 1960s lunar probes

Some people probably think image processing was designed for digital cameras (or to add filters to selfies), but in reality many of the basic algorithms we take for granted today (e.g. improving the sharpness of images) evolved in the 1960s with the NASA space program. The space age began in earnest in 1957 with the USSR’s launch of Sputnik I, the first man-made satellite to successfully orbit Earth. A string of Soviet successes led to Luna III, which in 1959 transmitted back to Earth the first images ever seen of the far side of the moon. The probe was equipped with an imaging system comprising a 35mm dual-lens camera, an automatic film processing unit, and a scanner. The camera sported 200mm f/5.6 and 500mm f/9.5 lenses, and carried temperature- and radiation-resistant 35mm isochrome film. Luna III took 29 photographs over a 40-minute period, covering 70% of the far side; however, only 17 of the images were transmitted back to Earth. The images were low-resolution and noisy.

The first image obtained from the Soviet Luna III probe on October 7, 1959 (29 photos were taken of the far side of the moon).

In response to the Soviet advances, NASA’s Jet Propulsion Lab (JPL) developed the Ranger series of probes, designed to return photographs and data from the moon. Many of the early probes were a disaster. Two failed to leave Earth orbit, one crashed onto the moon, and two left Earth orbit but missed the moon. Ranger 6 got to the moon, but its television cameras failed to turn on, so not a single image could be transmitted back to Earth. Ranger 7 was the last hope for the program. On July 31, 1964, Ranger 7 neared its lunar destination, and in the 17 minutes before it impacted the lunar surface it relayed the first detailed images of the moon – 4,316 of them – back to JPL.

Image processing was not really considered in the planning for the early space missions, and had to gain acceptance. The development of the early stages of image processing was led by Robert Nathan. Nathan received a PhD in crystallography in 1952, and by 1955 found himself running Caltech’s computer centre. In 1959 he moved to JPL to help develop equipment to map the moon. When he viewed pictures from the Luna III probe he remarked “I was certain we could do much better”, and “It was quite clear that extraneous noise had distorted their pictures and severely handicapped analysis” [1].

The cameras† used on the Rangers were Vidicon television cameras produced by RCA. The pictures were transmitted from space in analog form, but enhancing them would be difficult if they remained analog. It was Nathan who suggested digitizing the analog video signals and adapting 1D signal-processing techniques to process the 2D images. Frederick Billingsley and Roger Brandt of JPL devised a Video Film Converter (VFC) that transformed the analog video signals into digital data (6-bit, i.e. 64 gray levels).
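The digitization step can be illustrated with a small sketch – the function and the voltage range are hypothetical, but the 6-bit, 64-level mapping is as described above:

```python
def quantize_6bit(voltage, v_min=0.0, v_max=1.0):
    """Map an analog level in [v_min, v_max] to one of 64 grey levels (6 bits)."""
    frac = (voltage - v_min) / (v_max - v_min)
    # Scale to 0..63 and clamp, so out-of-range signals saturate rather than wrap.
    return max(0, min(63, int(frac * 64)))

print(quantize_6bit(0.0))  # darkest level: 0
print(quantize_6bit(0.5))  # mid-grey: 32
print(quantize_6bit(1.0))  # brightest level: 63
```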

The images had a number of issues. First there was geometric distortion: the beam that swept electrons across the face of the tube in the spacecraft’s camera moved at nonuniform rates that differed from those of the beam on the playback tube reproducing the image on Earth, resulting in images that were stretched or distorted. A second problem was photometric nonlinearity: the cameras tended to render the centre of the image brighter and the edges darker, caused by the nonuniform response of the phosphor on the tube’s surface. Thirdly, an oscillation in the electronics of the camera “bled” into the video signal, causing a visible periodic noise pattern. Lastly there was scan-line noise, the nonuniform response of the camera with respect to successive scan lines (the noise is generated at right angles to the scan). Nathan and the JPL team designed a series of algorithms to correct for these limitations. The image processing algorithms [2] were programmed on JPL’s IBM 7094, likely in Fortran.

  • The geometric distortion was corrected using a “rubber sheeting” algorithm that stretched the images to match a pre-flight calibration.
  • The photometric nonlinearity was measured before flight, and its effect removed from the images.
  • The oscillation noise was removed by isolating the noise on a featureless portion of the image, creating a filter, and subtracting the pattern from the rest of the image.
  • The scan-line noise was removed using a form of mean filtering.
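As a toy illustration of the last of these corrections – a sketch of the concept, not JPL’s actual algorithm – each scan line’s mean brightness can be equalised against the frame mean:

```python
import numpy as np

def remove_scanline_noise(image):
    """Equalise each scan line (row) against the overall mean brightness,
    a simple form of mean filtering across lines."""
    img = image.astype(float)
    row_means = img.mean(axis=1, keepdims=True)
    return img - (row_means - img.mean())

# Demo: a flat grey frame corrupted by a random per-line offset (striping).
rng = np.random.default_rng(0)
frame = np.full((8, 16), 100.0) + rng.normal(0, 5, size=(8, 1))
corrected = remove_scanline_noise(frame)  # striping removed, mean preserved
```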

Ranger 7 was followed by the successful Ranger 8 and Ranger 9 missions. The image processing algorithms were used to successfully process 17,259 images of the moon from Rangers 7, 8, and 9. Nathan and his team also developed other algorithms dealing with, among other things, random-noise removal and sine-wave correction.

[1] NASA Release 1966-0402
[2] Nathan, R., “Digital Video-Data Handling”, NASA Technical Report No.32-877 (1966)
[3] “Making New Reality: Computers in Simulations and Image Processing”, in Computers in Spaceflight: The NASA Experience.

† The Ranger missions used six cameras, two wide-angle and four narrow-angle.

  • Camera A was a 25mm f/1 with a FOV of 25×25° and a Vidicon target area of 11×11mm.
  • Camera B was a 76mm f/2 with a FOV of 8.4×8.4° and a Vidicon target area of 11×11mm.
  • Camera P used two type A and two type B cameras with a Vidicon target area of 2.8×2.8mm.