Computer tools

Last week, Google revealed that it would be experimenting with post-quantum cryptography in its browser, Google Chrome. The experiment will allow a small proportion of connections between Google’s servers and Chrome on the desktop to use a post-quantum key-exchange algorithm in addition to the elliptic-curve key-exchange algorithm already in wide use.

The fundamental concept driving the experiment is that large quantum computers, if they are ever built at scale, may be able to break the security algorithms in use today. Google’s philosophy is to be ready for such attacks before quantum computers are built or widely deployed.

Google’s experiment uses an algorithm called New Hope, which the company considers the most promising post-quantum key exchange after investigating a variety of options over the past year. Google hopes its engineers will gain real-world experience with the larger data structures that post-quantum algorithms will likely require if they become more widespread.

According to Google, layering the post-quantum algorithm over the existing one allows the company to conduct its experiment without affecting its users’ security. The company also pledged to stop the experiment within two years of collecting data, as it does not intend for its chosen post-quantum algorithm to become a de facto standard.
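To make the layering idea concrete, here is a minimal Python sketch of how a hybrid key exchange can combine two shared secrets into one session key. This illustrates the general approach only; it is not Google’s actual construction, and the random byte strings below are stand-ins for the secrets each algorithm would negotiate.

```python
import hashlib
import os

def derive_hybrid_key(ecc_secret: bytes, pq_secret: bytes) -> bytes:
    # Hash the concatenation of both shared secrets. An attacker must
    # break BOTH key exchanges to recover the resulting session key.
    return hashlib.sha256(ecc_secret + pq_secret).digest()

# Random stand-ins for the secrets the two exchanges would produce.
ecc_secret = os.urandom(32)  # e.g., from an elliptic-curve exchange
pq_secret = os.urandom(32)   # e.g., from New Hope

session_key = derive_hybrid_key(ecc_secret, pq_secret)
print(session_key.hex())
```

The appeal of this design is exactly what Google described: even if New Hope turns out to be weak, the session key is no easier to recover than it would be with elliptic curves alone.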

“Google’s investigating the quantum computing resistance of New Hope for a robust key exchange algorithm,” explained Rod Schultz, VP of product at Rubicon Labs. While the company’s announcement “doesn’t herald anything new,” he said, “it goes further to confirm that quantum computing-resistant algorithms will provide significant competitive advantage for anyone who has the IP for them.”

“You can view this investigation as [one] in Google’s core competency,” Schultz continued, “and also as a hedge and insurance policy around the catastrophic impact to encryption that quantum computing is predicted to have.”

Rob Enderle, principal analyst at the Enderle Group, weighed in as well:

“I doubt that we can develop a defense that works before we actually have quantum computers, because there’s no way to actually test something against a platform that doesn’t exist… Still, this approach could be better than existing methods, making it worthwhile to attempt.”

Jim McGregor, principal analyst at Tirias Research, stated that “Cybercriminals and government-sponsored organizations are looking at this technology too.”

“No one in the industry believes that any software solution is unbreakable,” he concluded.

Post-quantum cryptography has been of interest to cryptographers for years. In fact, the seventh annual international conference on post-quantum cryptography took place in Fukuoka, Japan, just a few months ago. The NSA has published information on the subject, and the U.S. National Institute of Standards and Technology published a report on post-quantum cryptography just last spring. Along with the report, the agency said it would collaborate openly with the public to develop and vet post-quantum crypto algorithms.

“Gaining access to powerful computing resources is not difficult anymore,” stated Schultz. “The bigger challenge will be in updating the current technology that’s prolific today with QC-resistant technology. It will only take a single quantum computer in the hands of the wrong person to destroy the foundation of encryption today.”

Apple Inc. may currently stand among the most powerful tech companies in Silicon Valley, but they’re not the only big dog on the block. Plenty of their competitors have been barking up the right trees for years, and they rightly have a bone to pick with Steve Jobs’ brainchild.

Nothing is a clearer indicator of Apple’s vulnerable position than the company’s decision to put out a second generation of the operating system that runs on the Apple Watch. The Apple Watch ranks among the dumbest accessories ever made for health-conscious 30-year-olds, on par with the little arm harnesses that made it possible to run with an iPod. Some industry watchers have joked that Steve Jobs may be haunting Tim Cook, forcing him to move forward with an idea stolen from Star Trek.

The Apple Watch is useless for a variety of reasons. The first and perhaps most important is that it cannot be used without the help of a better device that does all of the same things on a larger, more accessible screen. The Apple Watch is also made to be worn on a part of your body that gets wet every time you wash your hands, a “big no-no for computers,” as industry expert Jackie Robinson put it. Finally, the Apple Watch pretends you can send texts on it, but it’s basically a beeper that can only be written on with a stylus, which has the sex appeal of transition lenses, even in tech circles. The end product is an overly small computer that shames the personal computers that came before it.

Industry futurist Jake Guarino has suggested that Apple may keep creating wearable tech that only people with eagle eyes can read:

“I wouldn’t be surprised to see a patent filed for an Apple ring, or perhaps even an Apple navel stud that can also act as a flashlight,” offered Guarino over a cup of yerba mate. “Whether the navel stud will have a flashlight app is up for debate.”

Apple also recently embarrassed itself when CEO Tim Cook publicly refused to cooperate with the federal government, seeming to play the hero by taking a firm stance on his customers’ privacy. Unfortunately for Cook, the government simply sidestepped his efforts to defend Apple’s encryption and proved that it could hack into iPhones whenever it wanted. Cook’s argument boiled down to an effort to sanctify his brand, and at the end of the day no one cares about what was once an uproar.

Apple recently suffered its first down quarter since 2001 and has lost a variety of patent cases in China in the past year or so. It’s unlikely that Apple will be able to sell its iPhone 6 or iPhone 6S models in Beijing now that a recent ruling has found the phones illegally similar to an existing company’s product. Will there be a downfall for this tech king, and if so, when? Who’s to know, but much will likely become clear when the tech bubble bursts.

There is a gentleman by the name of Sir Terry Matthews who made an immense fortune in the tech sector with a line of companies that included Mitel Corp, Corel Corp, and of course Newbridge Networks, which he sold in a blockbuster deal in 2000 for $7 billion in stock. Today Sir Terry sets his sights on his roots, returning to Wales for what he believes may be his greatest challenge and greatest potential payoff. The stakes are high, to say the least.

The venture casts him as a kind of voodoo doctor, raising the long-deceased British steel industry from the grave and helping rebuild an industry that has lain dormant for well over a quarter century. How is he going to do this, you may ask? His plan is simple: give the steel industry big data and tech-driven infrastructure like it has never seen, and deliver a superior product through technology-based quality control of an industry that remains extremely antiquated in how it handles the actual manufacturing process, from raw material through the production of usable structural building members. The plan also includes a complete security overhaul.

Sir Terry has created Excalibur Steel UK Ltd. and is finishing rounds of investment to buy the floundering Tata steel mill in Port Talbot, near Swansea. Tata put its British operations up for sale in March because the division (which includes three steelmaking facilities) was painfully inefficient. The Port Talbot plant was the largest and employed upwards of 15,000 people directly.

The biggest roadblock for any buyer is the length of time required to execute an acquisition of this scale. The integrated plant, which makes strip steel used in auto manufacturing, construction, and appliances, is outdated and losing a whopping $2 million a day by some accounts. Energy costs are twice as high as elsewhere in Europe, and steel prices have plummeted because of oversupply from China.

“I’ve never been one to shy away from a fight,” said Sir Terry to his investors. He has secured upwards of $200 million in initial investment. The interesting thing is that the actual equipment covered by that price tag could fit into a semi truck or two; it’s really not much when you consider what rebooting a steel mill implies. What they are really getting for that money is R&D, and a lot of it. They are going to overhaul the entire computing system, and by taking in a huge number of variables from thousands of sources across the world, they believe they can make this a much more profitable venture.
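It is worth pausing on what data-driven quality control might mean in practice. Here is a toy Python sketch of the kind of statistical flagging a sensor-instrumented mill could apply; the furnace readings and the two-sigma rule are entirely invented for illustration, not details of Excalibur’s actual plan.

```python
import statistics

# Entirely hypothetical sensor feed: furnace temperatures in Celsius.
readings = [1510, 1512, 1509, 1515, 1620, 1511, 1508]

mean = statistics.mean(readings)
stdev = statistics.stdev(readings)

# Flag any reading more than 2 standard deviations from the mean --
# a crude stand-in for the statistical process control a modern
# mill's software might apply across thousands of live variables.
for t in readings:
    if abs(t - mean) > 2 * stdev:
        print(f"Out-of-spec reading: {t} C")
```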

This is one of the most dramatic about-faces ever attempted in an established industry, but you can rest assured that if it pays off, as most of Sir Terry’s ventures do, many more will follow suit.

According to U.S. FCC Commissioner Michael O’Rielly, the government may need to investigate Netflix’s practice of throttling video content delivery to customers using mobile devices.

That said, O’Rielly was quick to note that Netflix’s video throttling was not a violation of the FCC’s Net neutrality rules. Netflix recently announced plans to offer a data saver feature for its mobile apps beginning in May.

Netflix has made a clear stand as a proponent of Net neutrality, yet it admitted that it had secretly throttled video speeds for its customers on Verizon and AT&T without disclosing the policy to the carriers or to the customers themselves, according to The Wall Street Journal. The news surfaced after T-Mobile CEO John Legere accused the two rival carriers of throttling speeds, not knowing that Netflix was actually responsible.

Netflix has generally positioned itself as a company against restrictive data caps, which it considers harmful to consumers and to the Internet’s development in general. However, it did set a default mobile rate of 600 kilobits per second as a way to strike a balance between its customers’ video quality and the potentially excessive overage charges they might face from mobile carriers.
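Some quick arithmetic shows why 600 Kbps matters. A minimal Python sketch of the data-usage math follows; the 5,000-Kbps figure for a typical HD stream is our assumption for comparison, not a Netflix number.

```python
def gb_per_hour(kbps: float) -> float:
    # kilobits/second -> gigabytes/hour:
    # multiply by 3,600 seconds, divide by 8 bits per byte,
    # then divide by 1,000,000 kilobytes per gigabyte.
    return kbps * 3600 / 8 / 1_000_000

print(gb_per_hour(600))   # ~0.27 GB per hour at Netflix's mobile default
print(gb_per_hour(5000))  # ~2.25 GB per hour at an assumed HD bitrate
```

At 0.27 GB per hour, a 2-GB mobile plan buys over seven hours of video; at HD rates, it buys less than one.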

Spokesperson Anne Marie Squeo elaborated on Netflix’s perspective, which asserts that customers don’t actually need the same resolution on their phones as they do on large-screen televisions or computers:

“However,” she stated, “we recognize some members may be less sensitive to data caps or subscribe to mobile data plans from carriers that don’t levy penalties for exceeding caps.”

The American Cable Association last week asked the Federal Communications Commission to launch an investigation into the practices of edge providers.

“ACA has said all along that the Federal Communications Commission’s approach to Net neutrality is horribly one-sided and unfair, because it leaves consumers unprotected from the actions of edge providers that block and throttle lawful traffic,” explained ACA President Matthew Polka.

“While we’re disappointed to hear that Netflix has been throttling its videos for AT&T and Verizon customers, I think it’s important to realize that this wasn’t a violation of Net neutrality, since it was the edge provider itself who made the decision to throttle its own traffic,” stated Jeremy Gillula, staff technologist at the Electronic Frontier Foundation. Gillula believed Netflix had a responsibility to disclose its throttling policy earlier and more transparently, adding that all companies should have to be straightforward with their customers.

Others believe that people who take issue with Netflix’s throttling are blowing the issue out of proportion; the real threat, they argue, is Internet service providers coming between a provider like Netflix and its customers.

Christopher Mitchell, director of the Community Broadband Networks Initiative at the Institute for Local Self-Reliance, sees the issue this way: “In this case, Netflix is making choices regarding its own customers and is not impacting any other business. So I was not upset or worried to learn that Netflix is doing this,” he concluded.

On Tuesday, Microsoft announced plans to release a version of its flagship database product, SQL Server 2016, for Linux, a significant shift for the company.

“Bringing SQL Server to Linux is another way we are making our products and new innovations more accessible to a broader set of users and meeting them where they are,” said Scott Guthrie, executive vice president for Microsoft’s cloud and enterprise group, this week. The move will deliver a consistent data platform across SQL Server on Windows and Linux, both in the cloud and on premises, he noted. “Customers will be able to build and deploy more of their applications on a single data management and business analytics platform,” added one insider, Jennifer Reynolds.

Customers will also be able to leverage their existing tools, talent, and resources for more of their applications, the company noted. “With Microsoft bringing SQL Server to Linux, enterprises will be able to further integrate disparate platforms to deliver on the promise of the hybrid cloud, while increasing the choice that developers, customers and partners have as open source continues to form the foundation of the platforms of the future. It’s about capturing opportunities on Linux servers that Microsoft today doesn’t have any offerings for,” said Mike Ferris, Red Hat’s director of business architecture.

“A fair number of Microsoft shops aren’t pure Microsoft anymore, so increasingly companies that are deploying Linux in their infrastructure have had to look for a mixed database environment. Microsoft is trying to solve that for them,” Gold said.
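For a sense of what that mixed environment looks like from the Linux side, here is a minimal sketch using pyodbc, a real Python library for ODBC databases. The server, database, and credentials below are placeholders we invented, and it assumes Microsoft’s ODBC driver for SQL Server is installed on the Linux machine.

```python
import pyodbc  # pip install pyodbc; requires an ODBC driver for SQL Server

# Placeholder connection details -- substitute your own server and login.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=sqlserver.example.com;"
    "DATABASE=inventory;"
    "UID=app_user;PWD=secret"
)

cursor = conn.cursor()
cursor.execute("SELECT TOP 5 name FROM sys.tables")  # list a few tables
for row in cursor.fetchall():
    print(row.name)
conn.close()
```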

This could amount to what some would call a coup for Linux in the market. The real test of the announcement will come not two months from now but two years from now, when the consequences of today’s decisions are felt and we learn whether this was a shot in the dark or a genuine step toward the platforms of tomorrow.

One person commented on this potential coup: “There was a large internal battle over whether applications should be decoupled from Windows. Now they realize they have to be more flexible in a changing environment,” he told LinuxInsider.

This may very well be the case, but consider what IDC’s Gillen noted: “It gives Linux even more credibility than it already has. If Microsoft is convinced that Linux is a platform that needs to be supported, what does that say about Linux? It says it’s a respected and powerful platform.”


Like quantum computing, the IoT (Internet of Things) is drastically changing the way that people view and interact with computers. But what is it?

“The Internet of Things” became a tech buzzphrase when Kevin Ashton (cofounder of MIT’s Auto-ID Center) first used it in a presentation to Procter & Gamble, way back in 1999. A decade later, Ashton elaborated on the concept in an article he wrote for the Radio Frequency Identification (RFID) Journal:

“Today’s computers, and therefore the internet, are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes of data available on the Internet were first captured and created by human beings: by typing, pressing a record button, taking a digital picture or scanning a bar code. Conventional diagrams of the Internet include servers and routers and so on, but they leave out the most numerous and important routers of all: people. The problem is, people have limited time, attention and accuracy, all of which mean they are not very good at capturing data about things in the real world.”

“If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything, and greatly reduce waste, loss and cost,” he continued. “We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory. RFID and sensor technology enable computers to observe, identify, and understand the world, without the limitations of human-entered data.”

Let’s back up for a second. For the record, a member of the Internet of Things can be a lot of different kinds of “things”: a person, an animal, a vehicle, a man-made object, a natural object, anything that has been assigned an IP address and given the ability to transfer data over a network.

The Fitbit is an excellent example. Among other things, the Fitbit is a pedometer that tracks the number of steps taken by its wearer. That information is then sent to the user’s Fitbit account, so the user can track changes in his or her daily movement. The Fitbit therefore occupies a space in the Internet of Things, chiefly because it transfers data over a network to be accessed by other devices.
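Under the hood, “transfers data over a network” often reduces to something as simple as the following Python sketch. The endpoint, device ID, and JSON fields are invented for illustration; a real device uses its vendor’s API, frequently over MQTT or HTTPS.

```python
import json
import urllib.request

# A hypothetical step-count reading from a wearable device.
reading = {"device_id": "pedometer-42", "steps": 8412}

# Hypothetical cloud endpoint -- a real product would use its vendor's API.
req = urllib.request.Request(
    "https://api.example.com/v1/steps",
    data=json.dumps(reading).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

with urllib.request.urlopen(req) as resp:
    print(resp.status)  # 200 if the upload succeeded
```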

Ashton believes that products like the Fitbit scrape only the tip of the Internet of Things iceberg: “It’s not just a ‘bar code on steroids’ or a way to speed up toll roads, and we must never allow our vision to shrink to that scale. The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

That said, the Internet of Things has already come a long way from its humble beginnings as a 1980s Coke machine at Carnegie Mellon University.

Anyone with the latest iPhone or Android knows that fingerprint scanning has officially hit the mainstream. But how does that process work, and how accurate can it really be? Here’s a closer look at fingerprint scanning and how it works.

Fingerprint scanning falls under the umbrella of biometrics, the measurement of your physical form and/or behavioral habits, generally for the sake of identifying you before you are granted privileged access to something. Other examples of biometrics include handwriting, voiceprints, facial recognition, and hand-structure scanning.

It’s said that humans have tiny ridges and valleys all along the inside surfaces of their hands for the sake of friction; our fingerprints act as treads that let us climb and give us an improved grip on the things we carry. Who really knows, though. Regardless, we have fingerprints, and they happen to be different for each of us thanks to both genetic and environmental factors.

That’s extremely useful for security and law enforcement. With a fingerprint scanner, you can tell whether anyone whose fingerprints are on record touched a particular object. Fingerprint scanners can capture an image of a finger in many ways, but the two most common methods are optical scanning and capacitance scanning.

Optical scanners use a charge-coupled device (CCD), the same light-sensor system commonly found in digital cameras and camcorders. A CCD is just a collection of light-sensitive diodes called photosites that receive photons and generate an electrical signal in response. When you place your finger on the glass plate of a fingerprint scanner, the scanner’s light source illuminates the ridges of your finger, and the CCD generates an inverted picture of your fingerprint in which the ridges are lighter and the valleys are darker. As long as the image is sufficiently bright and crisp, the scanner then compares the print to other prints on file.

Capacitive fingerprint scanners function slightly differently but create the same output. They use electrical current to sense the print instead of light, so they’re built with one or more semiconductor chips containing an array of cells, each made up of two conductor plates covered with an insulating layer. These plates form a capacitor, and the surface of the finger acts as a third capacitor plate. The scanner reads how the voltage outputs vary with the distance from ridges and valleys to the capacitors, and from those differences it generates an image of the fingerprint. These systems are reportedly harder to trick and can be built more compactly.

Once the fingerprint registers, it must be analyzed to see whether it matches any other prints recorded in the system. This is done by comparing specific features of the fingerprints referred to as minutiae: points where ridge lines end or where one ridge splits into two. To declare a match, the scanner system simply has to find a sufficient number of minutiae that the two prints have in common.
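Here is a toy Python sketch of that matching step. Each print is reduced to a list of minutiae, and we count how many coincide; real matchers also align the prints for rotation and translation, which this deliberately ignores, and the tolerance and threshold values are arbitrary.

```python
def prints_match(print_a, print_b, tol=5.0, threshold=12):
    # Each print is a list of (x, y, type) minutiae. Count points in A
    # that have a same-type point in B within `tol` pixels; declare a
    # match if enough minutiae coincide.
    shared = 0
    for xa, ya, ta in print_a:
        for xb, yb, tb in print_b:
            if ta == tb and (xa - xb) ** 2 + (ya - yb) ** 2 <= tol ** 2:
                shared += 1
                break
    return shared >= threshold

a = [(10, 12, "ending"), (40, 33, "bifurcation"), (55, 80, "ending")]
b = [(11, 13, "ending"), (41, 31, "bifurcation"), (90, 10, "ending")]
print(prints_match(a, b, threshold=2))  # True: two minutiae coincide
```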


You are probably aware that you have a computer and a monitor, the most common output device used with personal computers.

But how do these two components work together? This article will help you to understand the basics behind the answer to this question.

As you can likely imagine, when you type a letter on your keyboard and see it appear as a text graphic on your monitor’s display, a signal has traveled across multiple components of your device. This signal can be in either analog or digital format.

If it’s in analog format, you likely are using a CRT or cathode ray tube display. Analog format implies the use of continuous electrical signals or waves to send information as opposed to 0s and 1s, which comprise digital signals.

Digital signals are much more common among computers, and a video adapter is often used to convert digital data into analog format for CRT displays. A video adapter is simply an expansion card or component that converts display information into an analog signal that can be sent to the monitor. It’s often called the graphics adapter, video card, or graphics card.

Once the graphics card converts the digital information from your computer into analog form, that information travels through a VGA cable, which attaches to the back of the computer via an analog connector known as a D-Sub connector. These connectors tend to have 15 pins in three rows, each with its own use. The connector has separate lines for the red, green, and blue color signals, as well as other pins. Ordinary televisions combine these signals into one composite video signal, but computer monitors keep them separate; in fact, the separation of these signals is responsible for the monitor’s superior resolution.

You can also use a DVI connection between your computer and monitor. DVI stands for Digital Visual Interface and was developed to forgo the digital-to-analog conversion process. LCD monitors support DVI and work in digital mode. Some can still accept analog information, but they must convert it into digital form before it can be displayed correctly.

Once the appropriate signals are reaching your monitor, you’re ready to start thinking about color depth. The more colors your monitor can display, the brighter and more beautiful the picture (and the more expensive the equipment). To talk about what makes one display capable of creating more colors than another, it’s important to discuss bit depth.

The number of bits used to describe a pixel is known as its bit depth. A display that operates in SVGA (Super VGA) mode can show a maximum of 16,777,216 colors because it can process a 24-bit description of each pixel. That 24-bit depth can be broken into three groups of 8 bits, with one group dedicated to each additive primary color: red, green, and blue. The 24-bit depth is known as true color because it can produce the roughly 10,000,000 colors visible to the human eye.

There is even a 32-bit bit depth. In that case, the extra eight bits are used in animation and video games to achieve effects like translucency.
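The arithmetic behind those color counts is simple enough to verify in a few lines of Python:

```python
def color_count(bit_depth: int) -> int:
    # A pixel described with n bits can take on 2**n distinct values.
    return 2 ** bit_depth

print(color_count(8))   # 256
print(color_count(16))  # 65,536 ("high color")
print(color_count(24))  # 16,777,216 ("true color": 8 bits each for R, G, B)
```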

…Quantum computers that is. They don’t quite exist yet (at least not at the level of practical use), but slowly mankind is working towards them. Here’s the story of why and how.

Actually, let’s preface this story with a quick overview of the switch that made modern computers possible: the switch, and amplifier, known as the junction transistor. Before germanium-based (and later silicon-based) transistors, clunky, unreliable, energy-inefficient vacuum tubes were used to close and manage circuits in televisions, radios, and the like. Once transistors were invented in the late 1940s, things changed rapidly; suddenly transistors could be put in devices as small as hearing aids, and pocket transistor radios became prevalent. Eventually people realized the transistor could be used in computers (making the 60,000-pound ENIAC the last of its breed), and transistors were so light, efficient, and small that a whole world of more powerful computing opened up. Today, microprocessors are made with millions of transistors etched into their silicon wafers, so major computer processing can happen on handheld mobile devices.

Transistors have reigned for about six decades now, so what’s next? After all, according to Moore’s Law, the number of transistors that can fit onto a microprocessor should double approximately every two years. For that to remain true into the 2020s and 2030s, scientists are investigating quantum computers: computers whose processing and memory are managed at the level of atoms, ions, photons, and electrons (in this context called qubits). Because these particles can exist in superposition (meaning a qubit is not strictly a 0 or a 1), some theories hold that they allow a parallel processing power that could beat modern computers a million-fold.
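To see why that parallelism claim is plausible, consider how expensive qubits are to simulate classically: tracking n qubits requires 2^n complex amplitudes. Here is a minimal NumPy sketch of a 3-qubit register, with a Hadamard gate putting the first qubit into superposition:

```python
import numpy as np

n = 3
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0  # start in the basis state |000>

# Apply a Hadamard gate to the first qubit (identity on the other two).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
gate = np.kron(H, np.kron(I, I))  # full 8x8 operator on the register

state = gate @ state
print(state)  # amplitude 1/sqrt(2) on |000> and |100>: a superposition
```

Each added qubit doubles the vector; by around 50 qubits the classical bookkeeping already runs to petabytes, which is the heart of the claimed speedup.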

So how’s that coming? Well let’s follow the path of history:

In 1998, Los Alamos and MIT researchers figured out how to spread a single qubit across three nuclear spins in each molecule of a liquid alanine or trichloroethylene solution. Using these solutions and the process of entanglement, the researchers learned how to observe a qubit’s properties without corrupting it through the act of observation.

In 2000, the scientists at Los Alamos hit it big again when they built a 7-qubit computer contained within a single drop of liquid. This quantum computer used nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid. NMR applied electromagnetic pulses that forced the particles to line up, and particles oriented parallel or counter to the magnetic field allowed the quantum computer to mimic the information-encoding bits of digital computers.

In 2001, researchers at Stanford University built a quantum computer that could demonstrate Shor’s Algorithm (a method for finding the prime factors of numbers, which plays a principal role in cryptography). The 7-qubit computer found the factors of 15.
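For the curious, here is what that factoring task reduces to, sketched classically in Python. The brute-force period-finding loop below is the step a quantum computer performs exponentially faster; the base a = 7 is just one conventional choice that happens to work for 15.

```python
from math import gcd

def shor_classical(N: int, a: int = 7):
    # Find the order r of a modulo N by brute force -- the step that
    # Shor's algorithm speeds up exponentially on a quantum computer.
    r = 1
    while pow(a, r, N) != 1:
        r += 1
    if r % 2 == 0:
        p = gcd(pow(a, r // 2) - 1, N)
        q = gcd(pow(a, r // 2) + 1, N)
        if 1 < p < N:
            return p, q
    return None  # unlucky choice of a; try another base

print(shor_classical(15))  # (3, 5): the order of 7 mod 15 is 4
```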

Skipping forward to 2007, Canadian startup D-Wave created a 16-qubit quantum computer that could solve a sudoku puzzle. D-Wave’s most recent model, the D-Wave 2X, has over 1,000 qubits, purportedly able to explore 2^1000 possible solutions simultaneously.

In the present day it is nearly impossible to imagine life without computers. They are part of day-to-day life, they assist us in almost every task, and they are required for countless applications. The computer is essential in matters of business; in fact, nothing in the business arena seems workable without one. The machine has become so vital in life and living largely because of its multifunctional nature, and computers have gained their significance through their ability to improve the productivity and efficiency of an organization.

The Importance of a PC

A PC can help in matters of business: it makes staff more efficient and productive, and it saves time, freeing you to concentrate on other things. It plays a vital role both in the office and in the wider business, handling the calculations and coordination needed at work. Computers are also in heavy use in schools, where the machine helps students grasp basic concepts and lets teachers explain things more clearly.

The Essentiality of a Computer

Indeed, the computer has become a vital educational tool these days, one of the best ways to convey basic concepts through video and audio examples. Computers also help professors and researchers perform at their best in the field: with the device, people can share essential knowledge with staff and other members, and in that way the machine helps carry a project to success. The computer is an indispensable object in people’s lives, executing functions that make living simpler and more manageable.

The Role Played by the Machine

The computer plays a part in many sectors of life. It is used in the medical industry to help doctors, and in banking, railways, electricity, and telephone departments. Computers in shopping malls these days make transactions easy and reliable, and you can find them throughout administration, serving both private and public needs.

The Smart Action of the PC

The PC also has a major role to play in the media and entertainment industry; it is the device of choice for making films and commercials, and with its help the definition of entertainment has changed entirely. Computers help businesses grow quickly and have driven an unmistakable rise in the economy, which is why no one can deny their usefulness. This is the technology that helps humans progress, with assurance and dependable technological assistance.
