By Laura Stillwagon
The Operation: How the Qi Works
The Qidenus works by using remote-shooting technology provided by Canon’s EOS Utility. EOS Utility employs tethered shooting, which connects DSLRs to a computer so that images can be taken from a computer at a distance from the cameras. Freshly taken images are immediately viewed and stored on the computer in the associated image-processing software. Paired with the Qi’s own operating software, QiDrive, both cameras capture a bound item simultaneously, each camera responsible for its own page of the open spread. And despite the name, the SMART Book Scan does not use smart technologies; it simply employs fairly new processing technology to coordinate the necessary software and the image-capture workflow.
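The dual-camera workflow described above can be sketched in code. This is an illustrative sketch only: EOS Utility and QiDrive expose no public scripting API that I can cite, so the `TetheredCamera` class and its `capture` method here are hypothetical stand-ins for the real tethered-capture software.

```python
from dataclasses import dataclass, field

@dataclass
class TetheredCamera:
    """Hypothetical stand-in for one DSLR controlled over a tether
    (as EOS Utility does); not a real EOS Utility interface."""
    name: str
    captured: list = field(default_factory=list)

    def capture(self, page_number: int) -> str:
        # In real tethered shooting the image file is written straight
        # to the host computer; here we just record a would-be filename.
        filename = f"{self.name}_page_{page_number:04d}.cr2"
        self.captured.append(filename)
        return filename

def scan_spread(left_cam, right_cam, spread_index):
    """Capture both pages of one open spread at once, one camera per page."""
    left_page = 2 * spread_index         # verso (left-hand page)
    right_page = 2 * spread_index + 1    # recto (right-hand page)
    return left_cam.capture(left_page), right_cam.capture(right_page)

left = TetheredCamera("left")
right = TetheredCamera("right")
files = [scan_spread(left, right, i) for i in range(3)]  # three spreads
```

The point of the sketch is the coordination, not the camera control: each trigger fires both cameras for the same spread, so page images accumulate in order on each side without the operator touching either camera.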
The Canon EOS 5DS digital SLRs are mounted at high angles inside the Qi so that each points directly at the bed on which items are set for capture. The bed lies inside the machine under a set of LED lights, and it consists of two panels, or leaves, that meet to form a 100-degree angle. Held open at this constant angle, bound items that may be delicate or tightly bound are well supported, allowing detail on the pages to be viewed without much of the distortion caused by the curve of an open binding. To further secure the item, and to ensure the pages lie flat, a large piece of glass matching the bed in size and angle is held on a vertical track so that it can be pulled down to rest on the item lying on the bed. The construction of the Qi also maintains a controlled lighting environment: with the LED lights and the DSLRs set within the ceiling of the machine, behind the walls and awning of the hooded structure, the cameras and lights are relatively unaffected by stray light from the room in which the machine is kept. In this way, Qi users can reach inside the machine to adjust the focus of the cameras under the cover of its roof and to make minor adjustments to the bed to best support a bound item of any given size.
Both Sides of the Moon: Perception
Besides learning more about image-capture settings and exposure, I found it necessary to consider light and perception. Many variables contribute to what ends up in the frame of an image and to its exposure. From the photographer, the camera, the lens, and the light environment to the subject itself, each point introduces some distortion of what is actually present in front of the lens. Beginning with the photographer: although all of us humans (for the sake of argument) share the same construction in our eyes for seeing what is around us, we all have differences in perspective (in the psycho-neurological sense) that account for much of the variability in our views of the world.
For the most part, all our eyes take in and transcribe light from the world in the same manner. It is only when the acquired information is translated by neurological processes that differences arise, due to mental associations and the like. Just as we perceive things differently, cameras do as well, especially with recent innovations in sensors, algorithms, and image post-processing. But there are limits to technologies and skills. Settings that are true to the light environment will yield highly detailed images indicative of reality. Images made on the Qi do just that: users can magnify digital images of pages in a book or journal on their desktops and see things they would not normally see by merely viewing the item in person.
The optimum image quality achieved by the Qi is a result of EOS and CMOS technology. Over the years, the Electro Optical System (EOS) platform and complementary metal-oxide-semiconductor (CMOS) sensor technology have increased the sensitivity of Canon digital SLRs, optimizing focus and performance. These two developments, plus complementary image processors, help transcribe and translate the light information from the lens into the image file, producing clear images. Not immune to the prevalence of digital technology and innovation today, Canon EOS cameras with CMOS sensors use computer technology to both broaden and sharpen the function of digital SLRs through better handling of image data.
[Figure: three captures of the same item.] These images depict the improvement in quality over time: the first was produced with the settings established when the Qi was set up; the second is one result of experimental changes to the capture settings; and the third is the final surrogate, truest to how the item looks under the Qi’s LED light. All three used a shutter speed of 1/5 s; the remaining settings were:
Image 1: aperture f/10, white balance K (6000 K), ISO 100, evaluative metering, Standard picture style.
Image 2: aperture f/7.1, Overcast white balance, ISO 200, evaluative metering, Landscape picture style.
Image 3: aperture f/7.1, Overcast white balance, ISO 200, center-weighted metering, Fine Detail picture style.
Part of my research into understanding DSLRs, EOS, photography, and light was not simply to learn definitions, or what happens when a dial on the camera is turned to another setting, but also to learn what experts had determined were the best settings for the Qi’s light environment, or one similar to it. With the Qi’s LED lights, I had to determine the color temperature of the lights and where the focus of the lenses would be strongest. Camera sensors generally interpret LED light like one of the varieties of natural light: direct sun, sunrise, cloudy, overcast, and so on. This discovery greatly improved the images, which had often carried a golden or sepia cast, or a bluish tint. Another important change was metering, which determines which areas inside the frame are given more weight, or priority, in exposure as a result of what is in focus. Picture style and metering tend to go hand in hand, with metering determining the focus and level of detail within the frame and picture style the hues of the colors therein.
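The shift in capture settings can also be compared quantitatively. The exposure value formula below is the standard photographic definition (EV normalized to ISO 100), not anything specific to the Qi or QiDrive; the f-numbers, shutter speed, and ISO values are taken from the settings listed above.

```python
import math

def ev100(f_number: float, shutter_s: float, iso: int) -> float:
    """Standard exposure value normalized to ISO 100:
    EV100 = log2(N^2 / t) - log2(ISO / 100)."""
    return math.log2(f_number**2 / shutter_s) - math.log2(iso / 100)

# Settings from the captioned test images (shutter speed 1/5 s throughout):
original = ev100(10.0, 1/5, 100)  # f/10, ISO 100  -> about EV 8.97
revised  = ev100(7.1, 1/5, 200)   # f/7.1, ISO 200 -> about EV 6.98
stops_gained = original - revised  # roughly 2 stops more light gathered
```

Opening the aperture from f/10 to f/7.1 gains about one stop, and doubling ISO from 100 to 200 gains another, which is consistent with the revised settings producing brighter, better-exposed images under the same LED lighting.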
The image-capture settings now yield images that require little to no post-processing, and no major editing that adjusts the exposure or look of the image beyond cropping or tilting. Thus the images, whatever the file format, retain RAW image quality, that is, the most detailed image data a camera first generates and stores.
In the End
No image can replace the experience of physically observing a bound item, but the extent of visual detail observable in person can be attained with the Qidenus. Digitization and digital archiving have long struggled with, and debated over, the creation of ‘surrogates’, the image-file counterparts of physical items, and their legitimacy. Those who perform digitization understand the purist perspective, which maintains that no representation of an item comes close to providing the wealth of information in the original. But they weigh the deterioration of the item, as well as the wider accessibility of digital versions, as more valuable reasons to continue digitizing, particularly as the technologies that perform digitization and provide access to its products improve. The digital initiatives taken here in Digital Collections share these concerns, but carry on knowing their services benefit the University of South Carolina and surrounding communities. The Qidenus is simply another tool fostering awareness and increased accessibility of rare and valuable archival items.