Mystery Energy
The world’s first self-aware computer program, as the story goes, was a piece of code written at the Princeton Artificial Intelligence Lab in 1968. Several thousand lines of code simulated a virtual spider, using a simple algorithm to plan its own movement and then scan its surroundings for threats. One technique behind programs like this is the evolutionary algorithm, which uses random mutations and natural selection to find algorithms that work. A neural network takes a different approach: it is built from many simple units called artificial neurons, each connected to other neurons, and the strength of those connections determines what the network does. An image-recognition network, for instance, is made of layers of such neurons that map the pixels of an image onto the categories it has learned to recognize.
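To make the mutate-and-select idea concrete, here is a minimal sketch of an evolutionary algorithm in Python. Everything in it (the bit-string target, the fitness and mutate helpers, the population size) is invented purely for illustration; real systems evolve far richer structures, but the loop of random mutation followed by selection is the same.

```python
import random

# Toy evolutionary algorithm: evolve a bit string to match a target pattern.
# The "fitness" of a candidate is how many bits it shares with the target.
TARGET = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1, 0, 1, 1]

def fitness(candidate):
    return sum(1 for c, t in zip(candidate, TARGET) if c == t)

def mutate(candidate, rate=0.05):
    # Flip each bit with a small probability (the "random mutation" step).
    return [1 - bit if random.random() < rate else bit for bit in candidate]

def evolve(pop_size=50, generations=200):
    population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for gen in range(generations):
        # "Natural selection": rank by fitness and keep the fittest half.
        population.sort(key=fitness, reverse=True)
        if fitness(population[0]) == len(TARGET):
            return gen, population[0]
        survivors = population[: pop_size // 2]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return generations, max(population, key=fitness)

if __name__ == "__main__":
    gen, best = evolve()
    print(f"best candidate after {gen} generations: {best}")
```

Nothing in the loop "knows" what the answer looks like; better candidates simply survive more often, which is the whole trick.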
Computers have been getting smarter ever since, and so has our understanding of what makes them smart. One key part is the ability to learn from experience: computers can use trial and error to discover new things. Today’s machines pack hundreds or even thousands of processing cores that, very loosely, play the role that billions of neurons play in a human brain. Some of their chips hold memory, others hold logic circuits, and some even carry sensors that let your computer know where it is in space.
Today we’re going to look at the hardware inside your computer and see how you can make better use of it. Let’s start with the processor and the storage. These days most computers are built around two main pieces: a processor, which is an integrated circuit, or IC, and separate storage such as a hard drive or a flash memory card.
The simplest ICs really are simple devices. A basic logic chip does no general-purpose computing at all; it takes a few input bits and produces output bits according to a fixed rule. Feed it a single bit and you get a single bit back. Feed in a larger group of bits and the right arrangement of gates can add two numbers together, or multiply them and spit out a third. But circuits like that only transform whatever appears at their inputs. If you want to shuffle numbers around, store intermediate results, and decide what to do next, you need a different kind of device: a microprocessor.
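To see how a handful of logic operations turns input bits into an answer, here is a small sketch of a half adder and a full adder in Python. The function names and the four-bit example are made up for illustration, but the AND/OR/XOR structure mirrors what the gates on the silicon actually do.

```python
# A half adder and full adder built only from AND (&), OR (|), and XOR (^).

def half_adder(a, b):
    # Adds two bits, producing a sum bit and a carry bit.
    return a ^ b, a & b

def full_adder(a, b, carry_in):
    # Adds two bits plus an incoming carry bit.
    s1, c1 = half_adder(a, b)
    s2, c2 = half_adder(s1, carry_in)
    return s2, c1 | c2

def add_4bit(x, y):
    # Chain four full adders to add two 4-bit numbers, least significant bit first.
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    return result, carry

# 6 (0110) + 3 (0011) = 9 (1001); bits are listed least significant first.
print(add_4bit([0, 1, 1, 0], [1, 1, 0, 0]))  # ([1, 0, 0, 1], 0)
```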
A CPU (central processing unit) is the part of a computer that performs arithmetic and logical operations on data to produce results. Today’s CPUs are powerful machines compared to their ancestors, and they work faster than ever before, so much so that we often forget what we used to do without them. When I was a kid, a movie camera, a GPS receiver, a microphone, speakers, and a full-size TV screen were all separate gadgets; today we’d roll them all into one device and simply call it a computer.
CPUs are built on silicon, the same material used for all semiconductor chips. Silicon transistors date back to the 1950s, and by the 1970s they had become cheap enough for consumer electronics. Today every computer contains an array of interconnected silicon chips holding billions of transistors, each switched by a tiny electrical current. Among those chips are the processor and the memory that stores the instructions and data needed to run your computer.
For years, PCs ran programs from a small amount of memory and from floppy disks that held barely more than a megabyte. Then came the era of hard drives, which could finally hold whole programs and, eventually, whole movies. Early drives were still cramped; you couldn’t store more than a few hours of video. Flash storage cards evolved from there, and now you can buy a 128GB thumb drive for $30.
Flash memory lets us load entire operating systems onto tiny devices, and it’s fast too: tens of times faster than a hard drive at pulling up data. Because it keeps its contents without drawing power, it’s ideal for mobile phones and cameras, and it’s a big part of why a phone with a screen bigger than a postage stamp became practical at all.
Flash memory is described as nonvolatile, meaning it doesn’t lose data when the power is turned off. That’s true, but it’s easy to read too much into it. The data isn’t frozen in place forever: each flash cell stores its bit as a small trapped electrical charge, and that charge slowly leaks away. Leave a drive unpowered in a drawer for enough years and some of the data can eventually fade.
So flash memory is important, but why did we need it? The problem was that your old PC or laptop, even one with plenty of disk space, ran slowly compared to newer models. The reason is that hard drives are really slow: before a block of data can be read or written, the drive has to swing its read head into position and wait for the disk to spin around to the right spot. A typical hard drive stores hundreds of gigabytes on a stack of spinning metal platters.
The alternative to a hard drive is the Solid State Drive (SSD). The name says it all: there are no moving parts. Instead of storing data on a spinning platter, an SSD stores it in layers of microscopic flash memory cells, each one a transistor that traps a tiny electrical charge. Because nothing has to physically move, those cells can be read and rewritten hundreds of times faster than a hard drive, at least when the data is scattered across the disk.
One other thing to remember is that not all of your data lives inside the machine; plenty of it has to travel over a connection before you can use it. You can cut the cables and go wireless with Wi-Fi or Bluetooth, but a fast SSD doesn’t help much when the file you want is on the other side of a network. At that point the bottleneck is no longer your storage; it’s the speed of your Internet connection.
Wi-Fi can be slow, and the signal degrades as it crosses a room: there are dead spots, and the farther you get from your router, the weaker it becomes. You can add repeaters to boost the signal, or mesh units that relay traffic to another nearby router, but those fixes add cost and only go so far. In most cases, you’re better off paying for a faster service.
This is where fiber optics comes in. Fiber optic cables carry data as pulses of light along hair-thin strands of glass, and their speed is phenomenal. They’re far less prone to interference than Wi-Fi, which makes them ideal for delivering data at high speed to your house.
Fiber optics are not perfect: they’re expensive to install, and they’re vulnerable to damage. But they’ve been used to build new networks everywhere, including places like Tokyo and San Francisco. When installed, fiber can provide a great way to upload and download files to and from your home computer.
In the future, computers will be smaller, more capable, and faster than ever before. We’ll be able to do things that are impossible today. Yet one thing is unlikely to change any time soon: the basic architecture of what goes inside them. For the foreseeable future, our computers will remain based on silicon, just like those first integrated circuits.
Even if we invented a completely new material tomorrow, it wouldn’t instantly revolutionize the computer industry, because chipmaking is a process that’s been refined over decades: purified silicon is melted down, grown into a single large crystal, sliced into thin wafers, and then patterned with circuits layer by layer. That’s how the world’s biggest computer makers produce their products.
But even though the process is the same, modern computers themselves are vastly more advanced than anything that went before. IBM’s Watson, for instance, ran on a cluster of roughly 2,880 processor cores spread across some 90 Power 750 servers with around 16TB of RAM. The whole thing filled a row of refrigerator-sized racks and contained millions of dollars’ worth of hardware.
IBM famously pitted Watson against two human champions on Jeopardy! and won, then turned the machine toward helping doctors diagnose diseases. The game show features clues about history, literature, art, music, science, sports, and current events, organized into dozens of categories. To respond, players have to parse wordplay, reason over what they know, and recognize when a piece of information isn’t relevant.
Watson has an enormous database of facts, and it uses sophisticated algorithms to evaluate each question and decide which candidate answers are most likely to be correct. You might think of artificial intelligence as something for playing games, but Watson has also been put to work analyzing medical records, which is useful in a number of ways, such as helping doctors weigh treatment options for patients.
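As a rough illustration of the general idea (a toy sketch, not IBM’s actual pipeline), you can imagine scoring each candidate answer by how much stored evidence supports it. The mini fact base, the scoring function, and the example question below are all invented for this example.

```python
# Toy candidate-answer scoring: rank candidates by how many of the question's
# terms appear in the evidence stored for each candidate.

EVIDENCE = {
    # Hypothetical mini "fact base": candidate answer -> snippets mentioning it.
    "Jupiter": ["largest planet in the solar system", "a gas giant", "has the Great Red Spot"],
    "Saturn":  ["a gas giant", "famous for its rings"],
    "Mars":    ["the red planet"],
}

def score(candidate, question_terms):
    # Count how many question terms appear in this candidate's evidence.
    text = " ".join(EVIDENCE.get(candidate, [])).lower()
    return sum(term in text for term in question_terms)

def best_answer(question, candidates):
    terms = question.lower().split()
    ranked = sorted(candidates, key=lambda c: score(c, terms), reverse=True)
    return ranked[0]

print(best_answer("largest gas giant planet", list(EVIDENCE)))  # Jupiter
```

The real system weighs hundreds of signals rather than one crude word count, but the basic move of generating candidates and ranking them by supporting evidence is the same.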
It seems unlikely that we’ll see similar advances in the next few years. There are no major breakthroughs in sight. More likely we’re seeing a plateau after the massive performance gains we saw in the last decade. So although some people talk about the end of Moore’s Law, we’re actually seeing the beginning of its replacement by other technologies.
That doesn’t mean we’re done tinkering with the fundamental structure of the computer, however. With each generation, computers become more powerful and more efficient. Just look at the latest developments in nanotechnology, which are enabling devices to be smaller, cheaper, and easier to manufacture. Nanotech promises to enable us to pack more computing power into smaller packages, and this will lead to even greater advances in computers.
We also have to consider the impact of quantum computing. Quantum computers differ from conventional computers in several important ways. A classical computer works by breaking data down into simple binary bits, each either 0 or 1, and then processes those bits using a fixed set of logical rules.
Quantum computers don’t operate in such simple terms. They store information in qubits, built from quantum systems such as the energy states of atoms or currents in superconducting circuits, and a qubit can sit in a superposition of 0 and 1 rather than being one or the other. Groups of qubits can also be entangled, which lets a quantum computer explore many possibilities within a single computation. As a result, quantum computers may be able to solve problems that are far too complex for conventional machines.
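A tiny simulation makes the contrast concrete. The sketch below, which assumes nothing beyond NumPy, represents one qubit as a two-component state vector, applies a Hadamard gate to put it into an equal superposition of 0 and 1, and then simulates measuring it; the variable names are invented here, but the math is the standard textbook picture.

```python
import numpy as np

# A classical bit is just 0 or 1. A qubit is a 2-component vector of complex
# amplitudes; measuring it yields 0 or 1 with probabilities equal to the
# squared magnitudes of those amplitudes.

ZERO = np.array([1.0, 0.0])                   # the qubit state |0>
HADAMARD = np.array([[1, 1],
                     [1, -1]]) / np.sqrt(2)   # gate that creates a superposition

def measure(state, shots=1000):
    probabilities = np.abs(state) ** 2
    return np.random.choice([0, 1], size=shots, p=probabilities)

superposed = HADAMARD @ ZERO                  # equal superposition of |0> and |1>
samples = measure(superposed)
print("state vector:", superposed)            # ~[0.707, 0.707]
print("fraction measured as 1:", samples.mean())  # ~0.5
```

Simulating a handful of qubits this way is easy; the catch is that the state vector doubles in size with every qubit you add, which is exactly why real quantum hardware is interesting.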
Although there’s currently little practical application for this technology, we should expect to see the development of quantum computers within the next few decades.
The End