Computeruser.com


Graphics Advisor: Time to go digital

One in five households will soon have a digicam. Should yours?

By Joe Farace

According to a recent imaging trend survey, one out of every five American households will own a digital camera by the time this article is published. That’s a lot of digicams, which explains why the market for these cameras is so hot. Part of this demand is fueled by the most basic desire of all picture makers: the need to share images with other people. Instead of sticking 4-by-6 snapshots into an envelope and round-robin mailing them to friends and family, you can use the Internet as a gathering place where photographs can be seen by relatives and acquaintances no matter where they’re located. Photo-sharing sites not only let you post images, they also provide a handy location where anybody can order a print for their own use, saving you the trouble of repeated trips to the local one-hour lab to get reprints made for Aunt Midge in Wisconsin.

How many megapixels are enough?

With the popularity of digicams, there’s still confusion about how much resolution is enough. Contrary to Mies van der Rohe’s axiom about design–less is more–more is always more when it comes to resolution. Five megapixels seems to be an optimum size for most applications. I’ve seen stunning output from Nikon’s 5.47-megapixel D1x when printed at 24 by 36 inches on an Epson Stylus Pro 10000 large-format inkjet printer, and fine detail was impressive even upon close examination. Eventually, though, cameras will reach a point of diminishing returns: more megapixels may be provided, but nobody will be able to tell the difference when looking at the output.

No matter what their maximum resolution may be, most digital cameras offer more than one resolution option, allowing you to select from several choices depending on how the image will be used. In this way, changes in resolution can be compared to film formats in traditional cameras: an image printed at a large size will look much better if it was photographed on 4-by-5 or 8-by-10 sheet film than if it was made with a 35mm or Advanced Photo System camera. The highest-quality digital image is made using the camera’s maximum resolution setting, and is your best choice if you want to make 5-by-7-inch or 8-by-10-inch prints. Images made at a good-quality setting are, more often than not, 640-by-480 pixels; these are more than adequate for use on the Web, but may not make the best prints. Some digicams, such as those from Epson and HP, have an image-quality button that uses a star analogy–the more stars displayed on the LCD screen, the higher the image quality.

The right format

Just as important as image resolution is what file format is used to store the images. Most cameras store images in the compressed JPEG format. Since JPEG is an inherently “lossy” format, some image data is invariably lost during the process of compression, so the best quality will be obtained by using the lowest compression ratios–or even better, no compression. Some digicams, such as Olympus’s Camedia E-10, let you store images in a RAW format that delivers every pixel that was captured, undiluted by compression. Other cameras let you save images using the TIFF format, making them immediately available for use by an image-editing program without requiring a RAW plug-in for importing.
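The storage trade-off behind those format choices is easy to estimate. Here is a rough sketch, assuming a 2,048-by-1,536 image stored as 24-bit RGB (as in an uncompressed TIFF) and a 10:1 JPEG compression ratio; both figures are illustrative assumptions, not from the article:

```python
# Estimates file sizes for an uncompressed 24-bit image vs. a JPEG,
# using an assumed (hypothetical) 10:1 compression ratio.
def uncompressed_bytes(width_px, height_px, bytes_per_pixel=3):
    """Size of a 24-bit RGB image with no compression (TIFF-style)."""
    return width_px * height_px * bytes_per_pixel

tiff_size = uncompressed_bytes(2048, 1536)   # 9,437,184 bytes (about 9 MB)
jpeg_size = tiff_size // 10                  # roughly 0.9 MB at 10:1
print(tiff_size, jpeg_size)
```

The tenfold gap is why cameras default to JPEG: a typical memory card of the era holds a handful of TIFFs but dozens of JPEGs, at the cost of some image data lost to compression.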

It’s always important to match the resolution of the camera to how the image will be reproduced. Image resolution can be less critical when printing with a desktop inkjet printer, and almost any contemporary digital camera will produce acceptable snapshot-sized images. Depending on the compression, a 3.3-megapixel camera will produce adequate quality for an 8-by-10-inch–maybe even an 11-by-14-inch–print if the other components in its imaging path are up to par. The key to making your images look good is matching the image size to how large it will be when it’s output.
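That matching step can be turned into a quick calculation. A minimal sketch, assuming a 3.3-megapixel camera with a 2,048-by-1,536 sensor and a 200-pixels-per-inch target for acceptable inkjet output (both figures are illustrative assumptions, not from the article):

```python
# Estimates the largest acceptable print from a given pixel count
# by dividing the pixel dimensions by the target output resolution.
def max_print_inches(width_px, height_px, ppi):
    """Return (width, height) in inches of the largest print at a given ppi."""
    return width_px / ppi, height_px / ppi

w, h = max_print_inches(2048, 1536, 200)   # hypothetical 3.3-megapixel sensor
print(f"{w:.1f} x {h:.1f} inches")         # 10.2 x 7.7 inches
```

At this target an 8-by-10-inch print fits comfortably, and relaxing the ppi figure stretches the same file toward 11-by-14, which matches the rule of thumb above.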

What’s a megapixel?

If you’ve seen digicam ads, you already know that a camera’s resolution is rated in megapixels. Pixel is short for picture element. A megapixel, naturally, is one million pixels. A digital photograph’s resolution is measured by the image’s width and height in pixels. The higher the resolution–the more pixels it has–the better the visual quality. To determine how many megapixels a camera can produce, multiply the image’s width by its height. In the case of a camera that has 2,240-by-1,680 resolution, that works out to 3,763,200 pixels, or about 3.7 megapixels. As in many fields, some manufacturers interpret that rating liberally. While you can get analytical about it, I prefer to interpret megapixel ratings much as I do automotive horsepower ratings. The hybrid/electric Honda Insight delivers 68 economical horsepower while a Dodge Viper produces 450 horsepower. Those ratings tell me the Viper will blow the doors off the Honda, but also that both cars can easily take you to work or shopping.
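The width-times-height arithmetic above takes only a few lines of Python (the 2,240-by-1,680 figure is the article’s own example; the function name is illustrative):

```python
# Computes a camera's megapixel rating from its pixel dimensions,
# following the width-times-height rule described above.
def megapixels(width_px, height_px):
    """Return the image's pixel count in millions of pixels."""
    return width_px * height_px / 1_000_000

total = 2240 * 1680                                # 3,763,200 pixels
print(total)                                       # 3763200
print(f"{megapixels(2240, 1680):.2f} megapixels")  # 3.76 megapixels
```

Manufacturers commonly round this figure for marketing, here to "3.7 megapixels."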

Some manufacturers, especially of low-end cameras, tend to slightly overstate their camera’s capabilities or advertise interpolated resolutions that use a variety of trademarked names. Just as when evaluating film or flatbed scanners, you should look for the optical-resolution (or raw) specifications of a camera when making a purchase decision. If you want to get down and techie with megapixels, visit the Megapixel Myths Web site.

Pass the chips

Most digital cameras use CCD (charge-coupled device) chips to capture light and convert it into images. In fact, some inexpensive cameras use camcorder chips, which have rectangular pixels. To be displayed on a monitor, these pixels have to be converted into square pixels. This conversion can produce a noisy image, which shows up as grain or posterized areas. Square pixels are typically found on higher-end digital cameras, but more and more inexpensive models use them as well.

Some cameras, such as Canon’s EOS D30, use a CMOS (pronounced “sea moss”) chip. In the past, CMOS imaging chips were inferior to CCDs, limiting the technology’s ability to penetrate the imager market. But more companies are using CMOS because of its inherent advantages: lower cost and lower battery consumption. CCD chips have high photoelectric efficiency, which permits pixels to be tightly packed and produces high-resolution arrays on silicon dies, but they also have higher manufacturing costs, higher power consumption, and lower production yields. CMOS sensors are built using the same kind of semiconductor process as microprocessor chips, and many functions–such as digital signal processing, logic, and microcontrollers–can be integrated onto a single chip. The downside of CMOS is that low-light performance is not as good as with an equivalent-sized CCD.

With pix you get eggroll

One of the biggest differences between film and digital photography is start-up cost. With a film camera, a local store will make prints for a modest charge. But when you purchase a digital camera you also buy a digital darkroom. For most computer users, this may mean a few upgrades along with some inexpensive peripherals.

Depending on your digital imaging goals, if you’re starting from scratch, the cost can easily go as high as $6,500. While some digital minilabs now offer direct printing from SmartMedia, CompactFlash, and even CD-ROM media, this kind of service is far from universal. In the meantime, if you want to make 14 wallet-sized prints of that digital file of Carleigh in her Christmas dress, you’re on your own. This might be fine for many users, but until digital minilabs are the norm, digicams won’t replace film cameras any time soon.
