This Tuesday, Microsoft will unveil its Windows 10 Anniversary Update. Most Windows users have been forced to upgrade to Windows 10 by now, but some of them, like this reporter, keep fighting for the last version of the software compatible with their refurbished laptops.

The free Windows 10 update includes two major security innovations: Windows Hello, which extends biometric authentication to apps and websites, and the more all-encompassing Windows Defender.

Enterprises running Microsoft operating systems will receive Windows Advanced Threat Protection, which can detect, investigate and respond to even advanced attacks on their networks. It arrives alongside Windows Information Protection, or WIP, previously known as “enterprise data protection.” Both updates were built with recent hacking epidemics in mind; businesses large and small are on the lookout for ways to protect themselves from malware more than ever, especially given the rampage that ransomware has carried out against organizations of every kind.

According to Laura DiDio, research director at Strategy Analytics, Microsoft “has done a very creditable and admirable job of paying attention to security: secure by design, secure by default, secure in implementation, and secure in storage… Now, they’re making it much more usable.”

The Anniversary Update will also extend the Windows Hello biometric authentication feature to the browser using FIDO, making it possible for users to access apps like Dropbox password-free. This comes along with new password-syncing features that make Cortana apps accessible on iOS, Android, and Windows 10 Mobile.

Those valuing security and easy access won’t be the only ones satisfied with the Anniversary Update; a host of other programs are being improved. These include Windows Ink, which will allow users to complete tasks with new tools like a digital pen with ink-specific features. Cortana will appear above the lock screen, making her accessible without unlocking the device. Microsoft Edge gains more power-saving improvements as well as helpful Edge Extensions like the “Pin It” button and AdBlock. Gamers will be able to use Cortana commands on Xbox One and select any supported language regardless of their location.

Finally, the new Windows Anniversary update will simplify PC deployment for teachers, who will be able to set up multiple devices without dedicated IT support thanks to simplified, step-by-step instructions available through Cortana. Schools that do have IT support will be able to set up shared devices in bulk with the updated Windows Imaging and Configuration Designer tool. Even first-timers should manage the tool with the help of Google and other how-to resources.

All in all, the update promises to make Windows a friendlier, more intuitive operating system that anyone can feel comfortable with, an area where Microsoft has always faced stiff competition from Apple.

On top of that, users will enjoy increased security and improved tools for a wide spectrum of tasks. The Anniversary Update will likely be enjoyed by anyone who wasn’t forced to upgrade to Windows 10 before realizing their computer couldn’t run it, only to end up pirating Windows 8 again, which should have been free anyway.

Last week, Google revealed that it would be experimenting with post-quantum cryptography in its Chrome browser. The experiment will let a small fraction of connections between Google’s servers and desktop Chrome use a post-quantum key-exchange algorithm alongside the elliptic-curve key-exchange algorithm that is already in wide use.

The fundamental concept driving the experiment is that large quantum computers, which threaten to upend computing as we know it, may be able to break the security algorithms in use today. Google’s philosophy is to be ready for such attacks before quantum computers are built or widely deployed.

Google’s experiment uses an algorithm called New Hope, which the company considers the most promising post-quantum key exchange after investigating a variety of options over the past year. Google hopes its engineers will gain real-world experience with the larger data structures that post-quantum algorithms will likely require as they become more widespread.

According to Google, layering the post-quantum algorithm over the existing one allows the company to conduct its experiment without affecting its users’ security. The company also pledged to stop the experiment after collecting data for two years, as it does not intend to make its post-quantum algorithm the standard.
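
The layering idea can be sketched in a few lines: derive the session key by hashing both shared secrets together, so an attacker would have to break both exchanges to recover the key. This is a minimal illustration of the concept, not Google's actual implementation; the secret values below are stand-ins.

```python
import hashlib

def combine_secrets(classical_secret: bytes, pq_secret: bytes) -> bytes:
    # Hash both shared secrets into one session key: recovering the key
    # requires breaking BOTH the elliptic-curve and the post-quantum
    # exchanges, so the hybrid is at least as strong as the stronger one.
    return hashlib.sha256(classical_secret + pq_secret).digest()

# Stand-in byte strings; in Chrome's experiment these would come from
# the elliptic-curve exchange and from New Hope.
session_key = combine_secrets(b"ec-shared-secret", b"new-hope-shared-secret")
```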

“Google’s investigating the quantum computing resistance of New Hope for a robust key exchange algorithm,” explained Rod Schultz, VP of product at Rubicon Labs. While the announcement “doesn’t herald anything new,” it “goes further to confirm that quantum computing-resistant algorithms will provide significant competitive advantage for anyone who has the IP for them.”

“You can view this investigation as [one] in Google’s core competency,” Schultz continued, “and also as a hedge and insurance policy around the catastrophic impact to encryption that quantum computing is predicted to have.”

Rob Enderle, principal analyst at the Enderle Group, chimed in as usual:

“I doubt that we can develop a defense that works before we actually have quantum computers, because there’s no way to actually test something against a platform that doesn’t exist… Still, this approach could be better than existing methods, making it worthwhile to attempt.”

Jim McGregor, principal analyst at Tirias Research, stated that “Cybercriminals and government-sponsored organizations are looking at this technology too.”

“No one in the industry believes that any software solution is unbreakable,” he concluded.

Post-quantum cryptography has interested cryptographers for years. In fact, the seventh annual international conference on post-quantum cryptography took place in Fukuoka, Japan just a few months ago. The U.S. National Security Agency has published information on the subject, and the National Institute of Standards and Technology released a report on post-quantum cryptography just last spring. Along with the report, the agency said it would collaborate openly with the public to develop and vet post-quantum crypto algorithms.

“Gaining access to powerful computing resources is not difficult anymore,” stated Schultz. “The bigger challenge will be in updating the current technology that’s prolific today with QC-resistant technology. It will only take a single quantum computer in the hands of the wrong person to destroy the foundation of encryption today.”

Apple Inc. may currently stand among the most powerful tech giants of Silicon Valley, but it’s not the only big dog on the block. Plenty of its competitors have been barking up the right trees for years, and they rightly have a bone to pick with Steve Jobs’ brainchild.

Nothing is a clearer indicator of Apple’s vulnerable status than the company’s decision to put out a second generation of the operating system that runs on the Apple Watch. The Apple Watch ranks among the dumbest accessories ever made for health-conscious 30-year-olds, on par with the little arm harnesses that make it possible to run with an iPod. Some industry analysts have alleged that Steve Jobs may be haunting Tim Cook, perhaps forcing him to move forward with an idea stolen from Star Trek.

The Apple Watch is useless for a variety of reasons. The first, and perhaps most important, is that it cannot be used without the help of a better device that does all the same things on a larger and more accessible screen. The Apple Watch is also made to be worn on a part of your body that gets wet every time you wash your hands, a “big no-no for computers” as described by industry expert Jackie Robinson. Finally, the Apple Watch pretends you can send texts on it, but it’s basically a beeper that can only be written on with a stylus, which has the sex appeal of transition lenses, even in tech circles. The end product is an overly small computer that shames the personal computers that came before it.

Industry futurist Jake Guarino has suggested that Apple may keep creating wearable tech that only people with eagle eyes can read:

“I wouldn’t be surprised to see a patent filed for an Apple ring, or perhaps even an Apple navel stud that can also act as a flashlight,” offered Guarino over a cup of yerba mate. “Whether the navel stud will have a flashlight app is up for debate.”

Apple also recently embarrassed itself when CEO Tim Cook publicly refused to cooperate with the federal government, seeming to play the hero by taking a firm stance on his customers’ privacy. Unfortunately for Cook, the government simply sidestepped his efforts to defend Apple’s encryption and proved that it could hack into iPhones whenever it wanted. In the end, Cook’s argument boiled down to an effort to sanctify his brand, and no one cares anymore about what was once an uproar.

Apple recently suffered its first down quarter since 2001 and has lost a string of patent cases in China over the past year or so. It’s unlikely that Apple will be able to sell its iPhone 6 or iPhone 6S models in Beijing now that a court has ruled the phones illegally similar to an existing company’s product. Will this tech king fall, and if so, when? Who’s to know, but much will likely become clear when the tech bubble bursts.

Last Monday, OnePlus announced a giveaway of 30,000 virtual reality headsets. Internet users jumped on the company’s generous offer, and the headsets ran out well before the day ended.

OnePlus has been developing a VR space it calls “The Loop,” in which the company plans to unveil the OnePlus 3. The Loop is one of many new and emerging worlds in the greater VR industry, and it is only accessible to those with OnePlus Loop VR headsets, which are produced by AntVR.

“We believe that we, and the tech industry as a whole, have only scratched the surface of what can be done in VR,” stated OnePlus cofounder Carl Pei.

The only issue with VR is the difficulty of getting the entire industry up and running: users have to spend a lot of money upfront to access a niche product that’s difficult to evaluate for quality before the purchase.

That’s why OnePlus’s decision to give out its headsets for free may be well worth the money; with 30,000 headsets in circulation, the company is generating its own market of VR consumers. According to Jim McGregor, the move “is an innovative way to generate press and consumer interest in the event and the new smartphone.”

“It is really difficult to stand out from the smartphone crowd in today’s market,” McGregor continues. “So this could help reach a broader audience and generate brand awareness.”

This will be the second smartphone that OnePlus has launched through the VR medium. Last year it gave away 30,000 OnePlus Cardboard viewers for the launch of the OnePlus 2. Unfortunately for the company, that launch didn’t go so well.

“Hoping this wouldn’t be much of a disaster like the OP2 VR launch,” stated one user in response to OnePlus’s announcement of its second try. Others are already disillusioned with the company, believing that the second launch is as doomed to fail as the first was.

“Forget cardboard, this year we’re excited to bring you the OnePlus Loop VR Headset, for a more robust, immersive and comfortable experience,” Pei said. He went on to say that Loop VR “is beautiful, and developed together with our buddies at AntVR… We’re confident that this year’s experience will be vastly improved.”

Larry Chiagouris, a professor of marketing at Pace University, has his own opinions about the launch:

“This would be a very expensive launch for most companies, but OnePlus at the moment is an unknown brand to most consumers,” he stated. If OnePlus can put itself in a position where it is known as the consumer electronics brand that’s most in alignment with VR applications, it “will be seen as a brilliant move.”

That said, if the decision flops along with the VR industry in general, the decision “could be seen as a very big mistake because OnePlus could have put far more of its mobile devices in the hands of consumers by simply discounting them.”

After all, OnePlus “likely could have put more than 100,000 devices in the hands of consumers with big discounts, and the related word of mouth would have been substantial,” he concluded.

According to U.S. FCC Commissioner Michael O’Rielly, the government may need to investigate Netflix’s practice of throttling video content delivery to customers using mobile devices.

That said, O’Rielly was quick to point out that Netflix’s video throttling was not a violation of the FCC’s Net neutrality rules. Netflix recently announced plans to offer a data saver feature for mobile apps beginning in May.

Netflix has taken a clear stand as a proponent of Net neutrality, yet it admitted that it secretly throttled video speeds for its Verizon and AT&T customers without disclosing the policy to the carriers or to the customers themselves, according to The Wall Street Journal. The news surfaced after T-Mobile CEO John Legere accused the two rival carriers of throttling speeds, not knowing that Netflix was actually responsible.

Netflix has generally positioned itself against restrictive data caps, which it considers harmful to consumers and to the internet’s development in general. However, it set a default mobile rate of 600 kilobits per second to strike a tentative balance between its customers’ video quality and the potentially excessive charges they could face from mobile carriers.
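
A quick back-of-the-envelope calculation shows why 600 Kbps matters on a capped plan (assuming 1 kilobit = 1,000 bits):

```python
RATE_KBPS = 600                              # Netflix's default mobile rate
bits_per_hour = RATE_KBPS * 1_000 * 3_600    # bits streamed per hour of video
mb_per_hour = bits_per_hour / 8 / 1_000_000  # convert bits to megabytes

hours_per_gb = 1_000 / mb_per_hour           # viewing time per gigabyte of cap

print(mb_per_hour)             # 270.0 MB per hour of video
print(round(hours_per_gb, 1))  # roughly 3.7 hours of viewing per GB
```

At 270 MB per hour, a subscriber with a 2 GB monthly cap gets about seven hours of video, which is the trade-off Netflix was weighing.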

Spokesperson Anne Marie Squeo elaborated on Netflix’s perspective, which asserts that customers don’t actually need the same resolution on their phones as they do on large-screen televisions or computers:

“However,” she stated, “we recognize some members may be less sensitive to data caps or subscribe to mobile data plans from carriers that don’t levy penalties for exceeding caps.”

The American Cable Association last week asked that the Federal Communications Commission launch some kind of investigation into the practices of edge providers.

“ACA has said all along that the Federal Communications Commission’s approach to Net neutrality is horribly one-sided and unfair because it leaves consumers unprotected from the actions of edge providers that block and throttle lawful traffic,” explained ACA President Matthew Polka.

“While we’re disappointed to hear that Netflix has been throttling its videos for AT&T and Verizon customers, I think it’s important to realize that this wasn’t a violation of Net neutrality, since it was the edge provider itself who made the decision to throttle its own traffic,” stated Jeremy Gillula, staff technologist at the Electronic Frontier Foundation. Gillula believes Netflix had a responsibility to disclose its throttling policy earlier and more transparently, adding that all companies should be straightforward with their customers.

Others believe that people who take issue with Netflix’s throttling are actually blowing the issue out of proportion, given that the real threat involves the fact that Internet service providers are coming between a provider like Netflix and its customers.

Christopher Mitchell, director of the Community Broadband Networks Initiative at the Institute for Local Self-Reliance, sees the issue this way: “In this case, Netflix is making choices regarding its own customers and is not impacting any other business. So I was not upset or worried learning that Netflix is doing this,” he concluded.

This Tuesday, Microsoft announced plans to unveil a version of its SQL Server 2016 database for Linux, a notable moment in the product’s history.

“Bringing SQL Server to Linux is another way we are making our products and new innovations more accessible to a broader set of users and meeting them where they are,” said Scott Guthrie, executive vice president for Microsoft’s cloud and enterprise group, this week. The move, he noted, will deliver a consistent data platform across SQL Server on Windows and on Linux, both in the cloud and on premises. “Customers will be able to build and deploy more of their applications on a single data management and business analytics platform,” added one insider, Jennifer Reynolds.

Customers will also be able to leverage their existing tools and talent across more of their applications, the company noted. “With Microsoft bringing SQL Server to Linux, enterprises will be able to further integrate disparate platforms to deliver on the promise of the hybrid cloud, while increasing the choice that developers, customers and partners have as open source continues to form the foundation of the platforms of the future. It’s about capturing opportunities on Linux servers that Microsoft today doesn’t have any offerings for,” said Mike Ferris, Red Hat’s director of business architecture.

“A fair number of Microsoft shops aren’t pure Microsoft anymore, so increasingly companies that are deploying Linux in their infrastructure have had to look for a mixed database environment. Microsoft is trying to solve that for them,” Gold said.

Some will call this a coup for Linux. The real measure of the announcement won’t come two months from now but two years from now, when we’ll know whether today’s move was a shot in the dark or a genuine bet on tomorrow.

One observer commented on this potential coup: “There was a large internal battle over whether applications should be decoupled from Windows. Now they realize they have to be more flexible in a changing environment,” he told LinuxInsider.

That may well be the case, but as IDC’s Gillen noted, the move “gives Linux even more credibility than it already has. If Microsoft is convinced that Linux is a platform that needs to be supported, what does that say about Linux? It says it’s a respected and powerful platform.”



Like quantum computing, the IoT (Internet of Things) is drastically changing the way that people view and interact with computers. But what is it?

“The Internet of Things” became a tech buzzphrase when Kevin Ashton, cofounder of MIT’s Auto-ID Center, first used it in a presentation he made to Procter & Gamble way back in 1999. A decade later, Ashton elaborated on the concept in an article he wrote for the RFID Journal:

“Today’s computers, and, therefore, the internet, are almost wholly dependent on human beings for information. Nearly all of the roughly 50 petabytes of data available on the Internet were first captured and created by human beings: by typing, pressing a record button, taking a digital picture or scanning a bar code. Conventional diagrams of the Internet include servers and routers and so on, but they leave out the most numerous and important routers of all: people. The problem is, people have limited time, attention and accuracy, all of which mean they are not very good at capturing data about things in the real world.”

“If we had computers that knew everything there was to know about things, using data they gathered without any help from us, we would be able to track and count everything, and greatly reduce waste, loss and cost,” he continued. “We need to empower computers with their own means of gathering information, so they can see, hear and smell the world for themselves, in all its random glory. RFID and sensor technology enable computers to observe, identify, and understand the world, without the limitations of human-entered data.”

Let’s back up for a second. For the record, a member of the Internet of Things can be many kinds of “things”: a person, an animal, a vehicle, a man-made object, a natural object, anything that has been assigned an IP address and provided with the ability to transfer data over a network.

The Fitbit is an excellent example. Among other things, the Fitbit is a pedometer that tracks the number of steps its wearer takes. That information is then sent to the user’s Fitbit account, letting the user track changes in his or her daily movement. The Fitbit therefore occupies a space in the Internet of Things, chiefly because it transfers data over a network to be accessed by other devices.
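
The defining move, packaging a reading so it can travel over a network to an account service, might look like this sketch. The device ID and field names are hypothetical illustrations, not Fitbit's actual API.

```python
import json
from datetime import datetime, timezone

def build_reading(device_id: str, steps: int) -> str:
    # Package a pedometer reading as JSON, ready to be sent over the
    # network to the user's account service. The "thing" identifies
    # itself by device_id; every field name here is hypothetical.
    payload = {
        "device_id": device_id,
        "metric": "steps",
        "value": steps,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(payload)

message = build_reading("tracker-001", 8421)
```

Once a reading is machine-readable like this, any other authorized device can pull it back down and display it, which is the whole point of membership in the Internet of Things.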

Ashton believes that products like the Fitbit scrape only the tip of the Internet of Things iceberg: “It’s not just a ‘bar code on steroids’ or a way to speed up toll roads, and we must never allow our vision to shrink to that scale. The Internet of Things has the potential to change the world, just as the Internet did. Maybe even more so.”

Indeed, the Internet of Things has already come a long way from its humble beginnings as a 1980s Coke machine at Carnegie Mellon University.

Anyone with the latest iPhone or Android knows that fingerprint scanning has officially hit the mainstream. But how does that process work, and how accurate can it really be? Here’s a closer look at fingerprint scanning and how it works.

Fingerprint scanning falls under the umbrella of biometrics, the measurement of your physical form and/or behavioral habits, generally for the sake of identifying you before you are granted privileged access to something. Other examples of biometrics include handwriting, voiceprints, facial recognition, and hand-structure scanning.

It’s said that humans have tiny ridges and valleys along the inside surfaces of their hands for the sake of friction; our fingerprints act as treads that help us climb and improve our grip on the things we carry. Who really knows, though. Regardless, we have fingerprints, and they happen to be different for each of us due to both genetic and environmental factors.

That’s extremely useful for security and law enforcement in general. With a fingerprint scanner, you can tell whether anyone whose fingerprints are on record touched a particular object. Fingerprint scanners can capture an image of a finger in many ways, but the two most common methods are optical scanning and capacitance scanning.

Optical scanners use a charge-coupled device (CCD), the same light-sensor system commonly found in digital cameras and camcorders. A CCD is just a collection of light-sensitive diodes called photosites that receive light photons and generate an electrical signal in response. When you place your finger on the glass plate of a fingerprint scanner, the scanner’s light source illuminates the ridges of your finger and the CCD generates an inverted picture of your fingerprint in which the ridges are lighter and the valleys are darker. As long as the image is sufficiently bright and crisp, the scanner then proceeds to compare the print to other prints on file.
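
The process can be caricatured in a few lines: treat the finger surface as a grid of ridges and valleys, and record a bright photosite reading wherever a ridge reflects the light source back to the CCD. The brightness values are arbitrary toy numbers, not real sensor output.

```python
def optical_scan(surface):
    # surface: grid of "R" (ridge) and "V" (valley) cells.
    # Ridges press against the glass and reflect the light source back
    # to the CCD (bright photosite); valleys sit farther away and read
    # darker, producing the inverted picture described above.
    brightness = {"R": 255, "V": 40}
    return [[brightness[cell] for cell in row] for row in surface]

image = optical_scan([["R", "V", "R"],
                      ["V", "R", "V"]])
```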

Capacitive fingerprint scanners function slightly differently but produce the same output. They use electrical current rather than light to sense the print, so they’re built with one or more semiconductor chips containing an array of cells, each made up of two conductor plates covered with an insulating layer. These plates form a capacitor, with the surface of the finger acting as the third capacitor plate. The scanner reads how the voltage outputs from the finger vary with the distance from valleys and ridges to the capacitors, and generates an image of the fingerprint from those differences. These systems are reportedly harder to trick and can be built more compactly.

Once the fingerprint registers, it must be analyzed to see whether it matches any other prints recorded in the system. This is done by comparing specific features of the prints known as minutiae, points where ridge lines end or where one ridge splits into two. To declare a match, the scanner system simply has to find a sufficient number of minutiae that the two prints have in common.
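
A toy version of that comparison: represent each minutia as a position plus a type, and declare a match when enough of them coincide. Real matchers also tolerate rotation, translation, and sensor noise, which this sketch ignores.

```python
def match_score(print_a, print_b):
    # Count minutiae (here, exact (x, y, type) triples) shared by two
    # prints. Real systems align the prints first and allow tolerances.
    return len(set(print_a) & set(print_b))

enrolled  = {(10, 4, "ending"), (22, 9, "bifurcation"), (31, 17, "ending")}
candidate = {(10, 4, "ending"), (22, 9, "bifurcation"), (40, 2, "ending")}

THRESHOLD = 2  # minimum shared minutiae to declare a match
is_match = match_score(enrolled, candidate) >= THRESHOLD
```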


You are probably aware that you have a computer and a monitor, the most-used output device for personal computers.

But how do these two components work together? This article will help you to understand the basics behind the answer to this question.

As you can likely imagine, when you type a letter on your keyboard and see it appear as a text graphic on your monitor’s display, signals have been sent across multiple components of your device. This signal can be in either analog or digital format.

If it’s in analog format, you are likely using a CRT, or cathode ray tube, display. Analog format means continuous electrical signals or waves carry the information, as opposed to the 0s and 1s that make up digital signals.

Digital signals are much more common among computers, and a video adapter is often used to convert digital data into analog format for CRT displays. A video adapter is simply an expansion card or component that converts display information into an analog signal that can be sent to the monitor. It’s often called the graphics adapter, video card, or graphics card.
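
The conversion itself is simple in principle: each 8-bit color value is scaled onto the analog voltage swing of its color line (VGA's nominal full-scale video level is about 0.7 V; treat the exact numbers as illustrative).

```python
def dac(level: int, v_full_scale: float = 0.7) -> float:
    # Map an 8-bit digital color level (0-255) onto the analog voltage
    # range of a VGA color line, as a video adapter's DAC does for each
    # of the red, green and blue signals.
    if not 0 <= level <= 255:
        raise ValueError("expected an 8-bit level")
    return level / 255 * v_full_scale

voltages = [dac(0), dac(128), dac(255)]  # black, mid-level, full intensity
```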

Once the graphics card converts the digital information from your computer into analog form, that information travels through a VGA cable connecting the back of the computer to an analog connector known as a D-Sub connector. These connectors tend to have 15 pins in three rows, each with its own use. The connector has separate lines for the red, green and blue color signals as well as other pins. Ordinary televisions combine all of these signals into one composite video signal, but a computer monitor keeps them separate, and that separation is responsible for the monitor’s superior resolution.

You can also use a DVI connection between your computer and monitor. DVI stands for Digital Visual Interface and was developed to forgo the digital-to-analog conversion process. LCD monitors support DVI and work in digital mode. Some can still accept analog information, but they need to convert it to digital before it can be displayed correctly.

Once the appropriate signals are reaching your monitor, you’re ready to start thinking about color depth. The more colors your monitor can display, the brighter and more beautiful the picture (and the more expensive the equipment). To discuss what makes one display capable of creating more colors than another, it’s important to understand bit depth.

The number of bits used to describe a pixel is known as its bit depth. A display that operates in SVGA (Super VGA) mode can show a maximum of 16,777,216 colors because it can process a 24-bit description of each pixel. That 24-bit depth breaks down into three groups of 8 bits, one group for each additive primary color: red, green, and blue. The 24-bit depth is known as true color because it can produce more shades than the roughly 10,000,000 colors the human eye can distinguish.
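
The arithmetic behind those numbers is easy to check:

```python
def colors(bits_per_channel: int, channels: int = 3) -> int:
    # Total displayable colors = (levels per channel) ** number of channels.
    return (2 ** bits_per_channel) ** channels

true_color = colors(8)   # 8 bits each for red, green and blue
print(true_color)        # 16777216, i.e. 2 ** 24
```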

There is even a 32-bit depth. In that case, the extra eight bits are used in animation and video games to achieve effects like translucency.
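
A common use of those extra eight bits is "source over" compositing; here is a minimal sketch, with the alpha value normalized to the range 0 to 1:

```python
def blend(src, dst, alpha):
    # Weight the source pixel by alpha and the destination by (1 - alpha);
    # this is how the alpha channel of a 32-bit pixel yields translucency.
    return tuple(round(alpha * s + (1 - alpha) * d) for s, d in zip(src, dst))

red, blue = (255, 0, 0), (0, 0, 255)
purple = blend(red, blue, 0.5)  # half-transparent red drawn over blue
```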

…Quantum computers that is. They don’t quite exist yet (at least not at the level of practical use), but slowly mankind is working towards them. Here’s the story of why and how.

Actually, let’s preface this story with a quick overview of the switch that made modern computers possible: the switch, and amplifier, known as the junction transistor. Before germanium-based (and later silicon-based) transistors, clunky, unreliable and energy-inefficient vacuum tubes were used to close and manage circuits in televisions, radios, and the like. Once transistors were invented in the late 1940s, things started to change rapidly; suddenly transistors could be put in devices as small as hearing aids, and pocket transistor radios became prevalent. Eventually people realized the transistor could be used in computers (making the 60,000-pound ENIAC the last of its breed), and transistors were so light, efficient and small that a whole world of more powerful computing opened up. Today, microprocessors are made with millions of transistors etched into silicon wafers, so major computer processing can occur on handheld mobile devices.

Transistors have reigned for about six decades now… so what’s next? After all, according to Moore’s Law, the number of transistors that can fit on a microprocessor should double approximately every two years. For this to remain true into the 2020s and 2030s, scientists are investigating quantum computers, whose processing and memory are managed at the level of atoms, ions, photons, and electrons (in this context called qubits). According to some scientific theories, the fact that these particles can exist in superposition (meaning they need not resolve to either a 0 or a 1) allows a parallel processing power that could exceed that of modern computers a millionfold.

So how’s that coming? Well let’s follow the path of history:

In 1998, Los Alamos and MIT researchers figured out how to spread a single qubit across three nuclear spins in each molecule of a liquid alanine or trichloroethylene solution. Using these solutions and the process of entanglement, the researchers worked out how to observe a qubit’s properties without corrupting it through the act of measurement.

In 2000, the scientists at Los Alamos hit it big again when they built a 7-qubit computer contained within a single drop of liquid. This quantum computer used nuclear magnetic resonance (NMR) to manipulate particles in the atomic nuclei of molecules of trans-crotonic acid. The NMR applied electromagnetic pulses that forced the particles to line up, and those particles, in positions parallel or counter to the magnetic field, allowed the quantum computer to mimic the information-encoding bits of digital computers.

In 2001, researchers at Stanford University built a quantum computer that could demonstrate Shor’s Algorithm, a method for finding the prime factors of numbers that plays a principal role in cryptography. The 7-qubit computer found the factors of 15.
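
The reduction Shor's Algorithm exploits can be walked through classically for N = 15; the quantum speedup lies entirely in the order-finding step, which is brute-forced here as a sketch.

```python
from math import gcd

def order(a: int, n: int) -> int:
    # Smallest r with a**r = 1 (mod n). This is the step a quantum
    # computer accelerates; here it is found by brute force.
    r = 1
    while pow(a, r, n) != 1:
        r += 1
    return r

def shor_factor(n: int, a: int):
    # Shor's reduction: an even order r of a base a (coprime to n)
    # yields a nontrivial factor of n via gcd(a**(r//2) - 1, n).
    r = order(a, n)
    if r % 2:
        return None                 # odd order: pick a different base
    x = pow(a, r // 2, n)
    p = gcd(x - 1, n)
    if 1 < p < n:
        return p, n // p
    return None

factors = shor_factor(15, 7)  # the demo's answer: (3, 5)
```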

Skipping forward to 2007, the Canadian startup D-Wave created a 16-qubit quantum computer that could solve a Sudoku puzzle. D-Wave’s most recent model, the D-Wave 2X, has over 1,000 qubits and can purportedly explore 2^1000 possible solutions simultaneously.