K Computer and Exascale Computing: The New Wave

Your brain on the K computer

The vast distance between the processing power of the human brain and that of supercomputers is slowly shrinking. Researchers used the K computer, a Japanese petascale supercomputer, to simulate the equivalent of a single second of brain activity. The K computer took 40 minutes to accomplish the feat of simulating approximately 1 percent of the brain's neural network.

With 705,024 processor cores and 1.4 million GB of RAM at its disposal, the K computer needed those 40 minutes to model the data in a project designed to test the abilities of the supercomputer and gauge the limits of brain simulation.

While computing on this scale is extremely impressive, the abilities of supercomputers are still inadequate in comparison to the staggering complexity of the human brain. The K computer is the fourth fastest supercomputer in the world and costs about $10 million to operate annually. It consumes 12.7 megawatts of power; according to Fujitsu, that's enough to supply approximately 30,000 homes.
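The figures above imply a striking gap between simulation time and real time. Here is a back-of-the-envelope sketch of that gap, using only the numbers quoted in this article (40 minutes of compute for 1 second of activity in roughly 1 percent of the brain) and a naive linear extrapolation to the full brain:

```python
# Back-of-the-envelope: how far is the K computer from real-time
# whole-brain simulation, given the figures quoted above?

sim_wall_time_s = 40 * 60   # 40 minutes of compute time...
sim_brain_time_s = 1        # ...to reproduce 1 second of brain activity
brain_fraction = 0.01       # covering roughly 1% of the brain's network

# Slowdown factor for the 1% model
slowdown_partial = sim_wall_time_s / sim_brain_time_s   # 2,400x

# Naive linear extrapolation to the full brain
slowdown_full = slowdown_partial / brain_fraction       # 240,000x

print(f"1% model: {slowdown_partial:,.0f}x slower than real time")
print(f"Full brain (linear estimate): {slowdown_full:,.0f}x slower")
```

The linear scaling is of course an assumption; real whole-brain simulation would likely scale worse, since the number of synaptic connections grows faster than the number of neurons.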

Related Article: Sweden is Running Out of Trash

The new Japanese K computer

860 of these cabinets, working at near full capacity for forty minutes, equaled 1 percent of your brain's activity over one second. Image: http://www.fujitsu.com/global/about/tech/k/whatis/project/

Your brain, on the other hand, needs only three meals a day, weighs only 3.086 lbs (1.4 kg), and consumes only 20 watts of power. That is a third of the power used by a 60-watt light bulb! The goal of large-scale computing has always been to create a computer with processing power comparable to that of the human brain. In order to achieve this feat, computer engineers will need to think of the universe as something that never had a box. From Brain-Like Chip May Solve Computers' Big Problem: Energy by Douglas Fox:
It is impressive that our computers are so accurate—but that accuracy is a house of cards. A single transistor accidentally flipping can crash a computer or shift a decimal point in your bank account. Engineers ensure that the millions of transistors on a chip behave reliably by slamming them with high voltages—essentially, pumping up the difference between a 1 and a 0 so that random variations in voltage are less likely to make one look like the other. That is a big reason why computers are such power hogs.

The Neurogrid computer, developed by Kwabena Boahen of Stanford University, aims much smaller than the K computer and, by virtue of that, much larger. While the traditional computer is strict and rigid, the Neurogrid is designed to accommodate the organic nature of the brain. Instead of following the efficiency-driven methods of other computer engineers, Boahen attempts to home in on the organized chaos of the human brain.

Related Article: Electronic Brain Implants Increase Intelligence

The Neurogrid and other computational innovations such as the K computer are likely to usher in a new wave of high-level processing, where the emphasis on raw power shifts to an emphasis on the delicate balance between information, energy, and noise. Perhaps by blending the two schools of thought (large and impressive; small and noisy) we will be able to power machines capable of processing on a similar plane as the human brain. Incorporating efficient practices will also help remove moral obstructions. Should we power this mega-rad PC, or provide the homes of an entire suburb or small village with electricity? Well, with efficiency (not to be confused with that dreaded bureaucracy) we may be able to do both!
The dreams of computer engineers are likely to come to fruition very, very soon.
If petascale computers like the K computer are capable of representing one percent of the network of a human brain today, then we know that simulating the whole brain at the level of the individual nerve cell and its synapses will be possible with exascale computers, hopefully available within the next decade.
For a better understanding of the difference between mega, giga, tera, peta, exa, etc., just remember that each one is 1,000 times more than the last. So, an exascale computer is 1,000 times more powerful than a petascale computer (like the K computer), 1,000,000 times more powerful than a terascale computer, and 1,000,000,000 (a billion) times more powerful than a gigascale computer (the computers that you and I have).
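The prefix arithmetic above is easy to verify. A small sketch, building each scale as a power of ten and checking the ratios the paragraph claims:

```python
# Each SI prefix step is a factor of 1,000 (10^3).
# Start at gigascale (10^9 operations/sec) and step up from there.
prefixes = ["giga", "tera", "peta", "exa"]
flops = {name: 10 ** (9 + 3 * i) for i, name in enumerate(prefixes)}

# Exascale vs. petascale (like the K computer): 1,000x
assert flops["exa"] // flops["peta"] == 1_000
# Exascale vs. terascale: 1,000,000x
assert flops["exa"] // flops["tera"] == 1_000_000
# Exascale vs. a gigascale desktop: 1,000,000,000x
assert flops["exa"] // flops["giga"] == 1_000_000_000
```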

Related Article: The Singularity is Nigh Upon Us: The Merging of Humans with Technology



Optogenetics: Your Brain Controlled by Light



Follow the light! Image: http://www.extremetech.com

A relatively new field of study called optogenetics is affording scientists the ability to activate and deactivate individual neurons of the brain using only light. This light-switch method is paving the way to an exponentially brighter future in neuroscience.

In April 2013 President Obama announced that he would ask Congress for $100 million in 2014 for what he called the Brain Research through Advancing Innovative Neurotechnologies (BRAIN) Initiative. The initiative seeks a more thorough understanding of the human brain. According to Obama, the goal is to “better understand how we think and how we learn and how we remember.” Optogenetics will undoubtedly play a vital role in attaining this goal.

Related Article: Electronic Brain Implants Increase Intelligence

While Obama’s goal seems straightforward, the brain is one of the most complex structures humans have ever come across. According to Francis Collins, director of the National Institutes of Health,

It’s an amazingly ambitious idea. To understand how the human brain works is about the most audacious scientific project you can imagine. It’s the most complicated structure in the known universe.

Until a few years ago, before the method of optogenetics had been created, scientists depended on fMRI technology to scan the brain and observe which areas were most active. If an area of the brain was found to be inactive, the only way to activate it was with a wire probe. While the probe was invasive, it still gave scientists a powerful and effective tool and allowed them to activate individual brain cells. But what if we want to investigate and understand a single neuron, or a group of neurons, or different groups all at once?

There are approximately 84 billion neurons in the human brain. This number has always seemed daunting, especially with the relatively limited tools of the past. Optogenetics, however, throws this hurdle aside in a blaze of innovation.

Related Article: Doctors Communicate with Vegetable Through Brain Scans

The method of optogenetics involves using only light to activate neurons based on their genetic type. Optogenetics is non-invasive and can even be performed on freely moving animals while still retaining exceptional precision.

Elizabeth Hillman, a biomedical engineer at Columbia University, is very excited about the optogenetic breakthrough, explaining that,

[Through optogenetics] you can select that very specific genetic cell type, and you can tell that specific cell type to react when you shine light on it.

For example, in the video below, scientists selected a specific motor neuron in a mouse to be affected by light. Just by shining a blue light on its head, they tell the mouse to start running. When the blue light disappears, so does the mouse’s movement.

While the methodology is opening doors left and right, optogenetics does not come without faults. Because neurons don't naturally respond to light, their genes must be altered with additional genetic material so that they react to the optogenetic process. Genetic engineering in humans isn't exactly mainstream right now. We know that using viruses to alter genes in humans can work well, but there are obvious risks associated with that process.

Related Article: Controlling Dreams and Implanting Memories

Another issue is that the light cannot reach the deepest cells in the brain. Additionally, optogenetic precision still has much room for improvement. Despite these concerns, optogenetics is still heralded as one of the top ten scientific breakthroughs of the decade. One of the most recent breakthroughs in optogenetics was the ability to influence cell types in the pre-frontal cortex.

So, how will optogenetics personally influence your life when it is commercialized? Depression, epilepsy, Parkinson’s disease, Alzheimer’s disease, addiction, and even fear may one day be flipped off with a simple flash of light. Consequently, these illnesses could be activated with a flash of light as well.

The future is clear: multicolored laser pointers will become the new standard tool for doctors and soldiers alike.


Computer Chips Modeled After the Human Brain


I dare you to look at contemporary computer chips and not admire their abilities. The most impressive example may be the realized dream of hand-sized smartphones, pieces of technology we already tend to take for granted. And yet, with all their condensed might packed into a few square centimetres, those chips are nearing their developmental boundaries.

Try to open your computer case and have a look. Ignore the dust! See all those messy cables inside? Modern computer architecture is hampered by the fact that data has to flow between the different parts of the computer: the CPU (central processing unit), the hard drive, the RAM, the video card, etc. (namely, those green cards you see inside the case). Although tremendous efforts have been made to accelerate those transfers, the data flow between those parts still imposes a serious bottleneck on the performance of computers, since software commands have to be executed sequentially.

Related Article: Electronic Brain Implants Increase Intelligence

A new study from Boise State University suggests a better solution to the problem: computer chip architecture modeled after the human brain. Instead of a central processing unit overwhelmed by data flow from different computer parts, the new architecture will be based on the way the human brain functions: multiple areas, each one processing its own part, contribute together to create the bigger picture. This kind of architecture eliminates the need for the major processing and memory units. Instead of a hard drive, RAM, a video card, and most probably the CPU itself, a new kind of universal electronic chip will process and store the data on its own.

According to the principal investigator of the research grant, Elisa Barney Smith,

By mimicking the brain’s billions of interconnections and pattern recognition capabilities, we may ultimately introduce a new paradigm in speed and power, and potentially enable systems that include the ability to learn, adapt and respond to their environment.

Related Article: Newcortex: How Human Memory Works and How We Learn

The neural approach is now becoming practical thanks to the ongoing development of a new type of resistor: the memristor. Memristors can be tweaked to new resistance levels by applying and removing electric currents, and they “remember” the last resistance applied to them even after the power is removed. In simple terms, a storage effect appears. First conceived in 1971, the memristor puzzled physicists and engineers for many years as a theoretical missing-link component, until recent developments finally made it practical. Although not yet in commercial use, memristors are already playing an active part in research.
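The storage effect described above can be sketched in code. This is a deliberately simplified toy model, not a physical device model: the class name, the linear drift rule, and the numbers are all illustrative assumptions, chosen only to show the behavior the paragraph describes (resistance drifts while current flows, and the last level persists after power-off):

```python
class ToyMemristor:
    """Toy model of the memristor's storage effect: resistance drifts
    while current flows and is retained when the power is removed.
    (Illustrative only; real devices are strongly nonlinear.)"""

    def __init__(self, resistance=1000.0, sensitivity=10.0):
        self.resistance = resistance    # ohms
        self.sensitivity = sensitivity  # ohms of drift per amp-second

    def apply_current(self, amps, seconds):
        # Positive current raises the resistance, negative lowers it.
        self.resistance += self.sensitivity * amps * seconds

    def power_off(self):
        # Nothing to do: unlike RAM, the state simply persists.
        return self.resistance


m = ToyMemristor()
m.apply_current(amps=2.0, seconds=5.0)  # "write" a new resistance level
stored = m.power_off()                   # 1000 + 10*2*5 = 1100.0 ohms
print(stored)                            # prints 1100.0
```

The key contrast with conventional memory is in `power_off`: nothing needs to be refreshed or saved, because the state *is* the resistance.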

Dexter Johnson from The Nanoclast goes into greater detail regarding memristors:

The memristor has been on a rapid development track ever since and has been promised to be commercially available as early as 2014, enabling 10 times greater embedded memory for mobile devices than currently available.

The obsolescence of flash memory at the hands of the latest nanotechnology has been predicted for longer than the commercial introduction of the memristor. But just at the moment it appears it’s going to reach its limits in storage capacity along comes a new way to push its capabilities to new heights, sometimes thanks to a nanomaterial like graphene.

Using memristors, the team hopes to apply algorithms inspired by the interaction between the neural synapses of the human brain. The effect should follow the intricate patterns our brain implements to process and store data.

Related Article: Of Cyborg Monkeys and New Hope for Amputees

Apart from sounding super cool (in a geekish way), this new approach harbors multiple advantages. First, tremendously increased processing power: thanks to Mother Nature (or whatever you believe in), our brain proves to be quite efficient at processing data, and the new generation of computers will benefit from that very same design. Second, the new chips will be considerably more power efficient, suggesting they may be used in places where power supply is an issue. We may expect a further decrease in electronic-chip sizes as well.

And lastly… did I already mention that this new architecture sounds super cool?