
Anyone with the latest iPhone or Android knows that fingerprint scanning has officially hit the mainstream. But how does that process work, and how accurate can it really be? Here’s a closer look at how fingerprint scanning works.

Fingerprint scanning falls under the umbrella of biometrics, the measure of your physical form and/or behavioral habits, generally for the sake of identifying you before you are granted privileged access to something. Other examples of biometrics include handwriting, voiceprints, facial recognition, and hand structure scanning.

It’s said that humans have tiny ridges and valleys all along the inside surface of their hands for the sake of friction; our fingerprints are meant to act as treads that allow us to climb and give us an improved grip on the things we carry. Who really knows, though? Regardless, we have fingerprints, and they happen to be different for each of us due to both genetic and environmental factors.

That’s extremely useful for security and law enforcement in general. With a fingerprint scanner, you can know whether anyone whose fingerprints are on record touched a particular object. Fingerprint scanners can get an image of someone’s finger in many ways, but the two most common methods are optical scanning and capacitance scanning.

Optical scanners use a charge-coupled device (CCD), which is the same light sensor system commonly found in digital cameras and camcorders. A CCD is just a collection of light-sensitive diodes called photosites that receive light photons and generate an electrical signal in response. When you place your finger on the glass plate of a fingerprint scanner, the scanner’s light source illuminates the ridges of your finger and the CCD generates an inverted picture of your fingerprint in which the ridges are lighter and the valleys are darker. So long as the image is sufficiently bright and crisp, the scanner will then proceed to compare the print to other prints on file.
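To give a rough idea of what “sufficiently bright and crisp” might mean in practice, here’s a small sketch in Python that checks the average darkness and contrast of a captured image before bothering to compare it against anything. The thresholds here are made up for illustration, not taken from any real scanner.

```python
# A minimal sketch of the brightness/sharpness check an optical scanner might
# perform before accepting a capture. Thresholds are illustrative assumptions.

def image_is_usable(pixels, dark_limit=40, bright_limit=215, min_contrast=25):
    """pixels: 2D list of grayscale values (0 = black, 255 = white)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    # A very dark or very bright average means the exposure is off.
    if mean < dark_limit or mean > bright_limit:
        return False
    # Little spread between ridge and valley pixels suggests a blurry capture.
    variance = sum((p - mean) ** 2 for p in flat) / len(flat)
    return variance ** 0.5 >= min_contrast

# Example: a tiny 2x4 "capture" with clear ridge/valley contrast.
sample = [[30, 200, 35, 190],
          [210, 40, 205, 45]]
print(image_is_usable(sample))  # True for this toy capture
```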

Capacitive fingerprint scanners function slightly differently but create the same output. They use electrical current to sense the print instead of light, so they’re built with one or more semiconductor chips containing an array of cells, each made up of two conductor plates covered with an insulating layer. These plates form a capacitor, and the surface of the finger acts as a third capacitor plate. Basically, the scanner reads how each cell’s voltage output varies with the distance from the finger’s ridges and valleys to the plates, and generates an image of the fingerprint from those readings. These systems are apparently harder to trick and can be built to be more compact.
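Here’s a toy sketch of that last step, mapping a grid of per-cell voltage readings into a grayscale fingerprint image. The voltage range and scaling are illustrative assumptions rather than values from any actual sensor.

```python
# A rough sketch of turning a capacitive sensor's per-cell voltage readings into
# a grayscale image. The voltage range and scaling are assumptions, not values
# from any particular sensor.

def voltages_to_image(cell_voltages, v_min=0.2, v_max=1.0):
    """cell_voltages: 2D list of readings; ridges (skin closer) give higher values."""
    def to_gray(v):
        # Clamp to the expected range, then scale so ridges come out dark.
        v = max(v_min, min(v_max, v))
        return int(255 * (1 - (v - v_min) / (v_max - v_min)))
    return [[to_gray(v) for v in row] for row in cell_voltages]

readings = [[0.95, 0.30, 0.90],
            [0.25, 0.92, 0.35]]
print(voltages_to_image(readings))  # ridge cells near 0 (dark), valley cells near 255
```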

Once the fingerprint registers, it must be analyzed to see if it matches any other prints recorded in the system. This occurs by comparing specific features of fingerprints referred to as minutiae. These points are generally areas where ridge lines end or where one ridge splits into two. To get a match, the scanner system simply has to find a sufficient number of minutiae that the two prints have in common.
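A simplified sketch of that matching step might look like the following. Real matchers also align the prints for rotation and shift and compare ridge directions, which is skipped here; the dozen-point threshold is just a commonly cited rule of thumb, not a universal standard.

```python
# A simplified sketch of minutiae matching: count how many feature points from a
# scanned print line up (within a small tolerance) with points in a stored
# template, and call it a match past a threshold.

def count_matching_minutiae(scanned, stored, tolerance=4.0):
    """Each minutia is (x, y, kind) where kind is 'ending' or 'bifurcation'."""
    matches = 0
    used = set()
    for sx, sy, skind in scanned:
        for i, (tx, ty, tkind) in enumerate(stored):
            if i in used or skind != tkind:
                continue
            if ((sx - tx) ** 2 + (sy - ty) ** 2) ** 0.5 <= tolerance:
                matches += 1
                used.add(i)
                break
    return matches

def is_match(scanned, stored, required=12):
    # Many systems require on the order of a dozen agreeing minutiae.
    return count_matching_minutiae(scanned, stored) >= required
```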

 

You are probably aware that you have a computer and a monitor, the most commonly used output device for personal computers.

But how do these two components work together? This article will help you understand the basics behind the answer to that question.

As you can likely imagine, when you type a letter on your keyboard and see it appear as a text graphic on your monitor’s display, signals have been sent across multiple components of your device. That signal can be in either analog or digital format.

If it’s in analog format, you are likely using a CRT, or cathode ray tube, display. Analog format implies the use of continuous electrical signals or waves to send information, as opposed to the 0s and 1s that make up digital signals.

Digital signals are much more common inside computers, and a video adapter is often used to convert digital data into analog format for CRT displays. A video adapter is simply an expansion card or component that converts display information into an analog signal that can be sent to the monitor. It’s often called the graphics adapter, video card, or graphics card.
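As a rough illustration, here’s a toy version of what the adapter’s digital-to-analog step does for a CRT: each 8-bit color level becomes an analog voltage. The 0 to 0.7 volt swing is typical of VGA, but treat the exact numbers as an assumption.

```python
# A toy sketch of the digital-to-analog step a video adapter performs for a CRT:
# each 8-bit color value is mapped onto an analog voltage level.
# The 0-0.7 V range is typical of VGA; exact figures vary by implementation.

def dac_output(level_8bit, full_scale_volts=0.7):
    """Map a digital color level (0-255) to an analog voltage."""
    return full_scale_volts * (level_8bit / 255)

# A mid-gray pixel, sent as three separate analog levels (red, green, blue).
pixel = (128, 128, 128)
print([round(dac_output(c), 3) for c in pixel])  # [0.351, 0.351, 0.351]
```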

Once the graphics card converts the digital information from your computer into analog form, that information travels through a VGA cable, which plugs into the back of the computer at an analog connector known as a D-Sub connector. These connectors typically have 15 pins in three rows, each with its own use. The connector has separate lines for the red, green, and blue color signals, as well as other pins. Ordinary televisions combine all of these signals into one composite video signal, but computers keep them separate. In fact, the separation of these signals at a computer monitor’s connector is largely responsible for the monitor’s superior resolution.
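For the curious, here’s a small sketch listing the main signal pins of that 15-pin connector, following the standard VGA pinout (ground and identification pins left out for brevity).

```python
# The main signal lines on a DE-15 (VGA) connector, per the standard pinout.
# Ground and ID/DDC pins are omitted to keep the sketch short.

VGA_SIGNAL_PINS = {
    1: "Red video",
    2: "Green video",
    3: "Blue video",
    13: "Horizontal sync",
    14: "Vertical sync",
}

# A composite signal would squeeze all of these onto a single wire; keeping them
# separate is what lets a computer monitor resolve a sharper image.
for pin, signal in sorted(VGA_SIGNAL_PINS.items()):
    print(f"pin {pin:2d}: {signal}")
```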

You can also use a DVI connection between your computer and display monitor. DVI stands for Digital Visual Interface and was developed in the interest of forgoing the digital-to-analog conversion process. LCD monitors support DVI and work in a digital mode. Some can still accept analog information, but they need to convert it into digital information before it can be displayed correctly.

Once the appropriate signals are making it to your computer’s monitor, you’re ready to start thinking about color depth. The more colors your monitor can display, the brighter and more beautiful the picture (and the more expensive the equipment). To talk about what makes one display capable of creating more colors than another, it’s important to discuss bit depth.

The number of bits used to describe a pixel is known as its bit depth. A display that operates in SVGA (Super VGA) can display a maximum of 16,777,216 colors because it can process a 24-bit-long description of a pixel. This 24-bit bit depth can be broken down into three groups of 8 bits, with one group dedicated to each additive primary color: red, green, and blue. The 24-bit bit depth is known as true color because it can produce the roughly 10,000,000 colors visible to the human eye.
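The arithmetic is easy to check, and a 24-bit pixel can be unpacked into its three 8-bit channels with a couple of bit shifts:

```python
# A quick check of the arithmetic above: a 24-bit pixel holds 2**24 distinct
# values, split into three 8-bit channels.

def color_count(bit_depth):
    return 2 ** bit_depth

def split_24bit(pixel):
    """Unpack a 24-bit integer into its 8-bit red, green, and blue components."""
    red = (pixel >> 16) & 0xFF
    green = (pixel >> 8) & 0xFF
    blue = pixel & 0xFF
    return red, green, blue

print(color_count(24))        # 16777216
print(split_24bit(0xFF8000))  # (255, 128, 0) -- an orange pixel
```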

There is even a 32-bit bit depth. In this case, the extra eight bits are used in animation and video games to achieve effects like translucency.
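A common way those extra eight bits (the alpha channel) get used is straightforward alpha blending, where a translucent foreground pixel is mixed with whatever is behind it. A quick sketch:

```python
# A sketch of how an 8-bit alpha channel is used for translucency: standard
# alpha blending of a foreground pixel over a background pixel.

def blend(fg, bg, alpha):
    """fg, bg: (r, g, b) tuples; alpha: 0-255, where 255 is fully opaque."""
    a = alpha / 255
    return tuple(round(a * f + (1 - a) * b) for f, b in zip(fg, bg))

# A half-transparent red drawn over a white background comes out pink.
print(blend((255, 0, 0), (255, 255, 255), 128))  # (255, 127, 127)
```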
