The race of the next 10 years
Abstract
This article represents my personal opinion about the technological developments of the next 10 years.
I believe that the next 10 years will bring dramatic changes to our lives. There are numerous groundbreaking technologies on the march, and all of them are based on the dramatic increase in computing power over the last two decades.
The technologies of disruption are:
- Cloud Computing
- Biotechnology with CRISPR/Cas9
- Machine Learning
- Deep Learning
- Artificial Intelligence
Disrupted fields in human life will be:
- our life span
- daily work
- transportation
- assets / wealth
- … and a lot more
Those changes will have such a dramatic impact on our social and personal lives that we need to prepare for this foreseeable future.
Scepticism
My father-in-law grew up in East Germany under the communist regime of the German Democratic Republic. He recently stated that he does not believe in next-gen cars, transportation and so on. He has been disappointed by his own experiences since his childhood. There was this book called “Weltall Erde Mensch” (Wikipedia link), and the future predictions in it have not come true, so he does not believe in any future predictions either. In this discussion I realized that all the changes I have seen in the last 20 years came incredibly fast, but still do not satisfy the futuristic ideas of flying cars and living on Mars. From his perspective, humanity has failed.
Why now?
The momentum is now because of the drastic improvement in computational power. We have the ability to sequence the human genome in a very short time and to run very sophisticated neural networks on standard desktop PCs; mainframes equipped with GPUs can simulate networks of unbelievable size and process a vast amount of training data. Furthermore, computers nowadays have the ability to collect all this data and to process it as well.
Computing in the Clouds
Cloud computing is everywhere. Everybody talks about it, and it is one of the most profitable services on the market right now. This can be seen in the quarterly reports of Microsoft (Azure), Amazon (AWS) and Google. Those numbers drive a whole industry right now. Actually, this is just a means to an end. Cloud services are not only made for better customer satisfaction; they are meant to collect an enormous amount of data and to centralize data sovereignty.
Furthermore, cloud computing builds the foundation for all those new technologies popping up.
CRISPR/Cas9 – What the heck?
CRISPR/Cas9 is a bioengineering technology that enables cutting, editing and re-sequencing DNA in a very short amount of time, guided by short RNA sequences. Computer systems are used to calculate suitable cutting points from very large genomic data sets. This has been made possible by the latest CPU power, massive parallel computing on GPUs and a very big drop in genome sequencing costs over the last years.
CRISPR/Cas9 is being used to fight cancer, remove HIV, make diseases self-destruct, produce super foods, create biofuel and a lot more.
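To give a flavour of the computational side, here is a minimal Python sketch, not taken from any real CRISPR tool, that scans a DNA string for the NGG “PAM” motifs Cas9 needs next to a cut site; the example sequence and the function name are invented for illustration.

```python
import re

def find_pam_sites(dna: str) -> list[int]:
    """Return the 0-based positions of all NGG PAM motifs in a DNA string.

    Cas9 cuts roughly 3 bp upstream of the PAM, so each hit marks a candidate
    cut region. This is only a toy illustration, not a real guide-design tool
    (which would also score off-target risk, GC content and much more).
    """
    # The lookahead '(?=...)' also finds overlapping motifs.
    return [m.start() for m in re.finditer(r"(?=[ACGT]GG)", dna.upper())]

if __name__ == "__main__":
    sequence = "ATGCCGGTACGGAGTTGGCATAGG"  # made-up example sequence
    for pos in find_pam_sites(sequence):
        print(f"PAM candidate at position {pos}: {sequence[pos:pos+3]}")
```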
Machine Learning
Machine learning is a collection of mathematical methods for pattern recognition tasks. This is a very big field with different mathematical and theoretical approaches. The goal of machine learning is to minimize the prediction error of a model. One major issue is that raw data can be too large and unstructured to analyze directly, so many scientists hand-select features to make the learning process tractable. This is called feature engineering, sketched in the example below.
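As a small illustration of feature engineering, the following sketch uses scikit-learn (not part of the framework list at the end, just a common choice and an assumption on my side) to train a logistic regression on two hand-selected features of short text snippets; the data, features and labels are made up.

```python
from sklearn.linear_model import LogisticRegression

# Toy data set: a few text snippets labelled as spam (1) or not spam (0).
texts = ["WIN CASH NOW!!!", "meeting at 10am", "FREE $$$ CLICK HERE", "lunch tomorrow?"]
labels = [1, 0, 1, 0]

def engineer_features(text: str) -> list[float]:
    """Hand-selected features: share of upper-case letters and count of '!' / '$'."""
    letters = [c for c in text if c.isalpha()]
    upper_ratio = sum(c.isupper() for c in letters) / max(len(letters), 1)
    shouting = text.count("!") + text.count("$")
    return [upper_ratio, float(shouting)]

X = [engineer_features(t) for t in texts]
model = LogisticRegression().fit(X, labels)

print(model.predict([engineer_features("CLAIM YOUR FREE PRIZE!!!")]))  # likely [1]
```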
Deep Learning
Deep learning is a special form of machine learning. With deep learning we use artificial neural networks to accomplish machine learning tasks. Those neural networks come in many different forms, such as Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN). The specifics of those networks are a matter for a different article.
The basic idea is to drop predefined features and let the neural network decide which features are important for the task, based on the prepared data set. The only tasks for the developer are to reduce the data to a usable state and to run the architecture. A deep autoencoder, for example, encodes the data into the smallest representation that still captures the task, and its decoder reconstructs a real-world data sample from that representation. Autoencoders have many applications in image, text and audio recognition tasks, and with the decoder you also get the ability to resynthesize images, text and audio for a particular task. A minimal sketch follows below.
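As a minimal sketch of the autoencoder idea, assuming Keras from the framework list at the end and arbitrary layer sizes, the following compresses 784-dimensional inputs (e.g. flattened 28×28 images) into a 32-dimensional code and reconstructs them:

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Encoder: compress 784 inputs (e.g. a flattened 28x28 image) into a 32-dim code.
# Decoder: reconstruct the original 784 values from that code.
autoencoder = keras.Sequential([
    keras.Input(shape=(784,)),
    layers.Dense(128, activation="relu"),
    layers.Dense(32, activation="relu", name="code"),   # the compressed representation
    layers.Dense(128, activation="relu"),
    layers.Dense(784, activation="sigmoid"),
])
autoencoder.compile(optimizer="adam", loss="mse")

# Random stand-in data; in practice this would be real images, text or audio features.
x = np.random.rand(256, 784).astype("float32")
autoencoder.fit(x, x, epochs=3, batch_size=32, verbose=0)  # learns to reproduce its input

reconstruction = autoencoder.predict(x[:1])
```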
Artificial Intelligence
An artificial intelligence is a machine that mimics human behavior. This can probably be achieved with deep learning and multiple machine learning technologies. A classic goal in the artificial intelligence field is to pass the Turing test (named after Alan Turing). In this test a machine must interact with a human being without being recognized as a machine, but accepted as a human being.
Summary
Today we are at the point where machine learning is about to jump into almost every field of our lives. We will see new bio materials, self-driving cars, self-flying planes, real-time translators with audio in and out, and automatic machine controllers in plants and factories that produce goods like cars, clothes and food and will take over the simple tasks. Every learnable task with a lot of data will be automated very soon. An autonomous system can run through hundreds of years' worth of training in a very short amount of time, so it will outperform any human very easily.
The most important point about machine learning, deep learning and artificial intelligence is that training the neural networks might take a long time, but once trained they have the ability to run in real time or near real time.
Not yet convinced? There are impressive examples:
- DeepMind's AlphaStar plays the very complex game StarCraft II against the most skilled players of the game.
- DeepMind's AlphaFold predicts how proteins fold far better than any scientific group before.
- Tesla cars drive largely autonomously on regular streets.
Where to start?
There are a lot of frameworks. Here is a list (a minimal getting-started sketch follows after it):
- Easy start?
  - use Ludwig (Uber's TensorFlow-based, easy-to-use tool that requires no coding)
  - or Azure Machine Learning Studio
- TensorFlow (Python, C++ and third-party Java bindings)
- TensorFlowJS (JavaScript implementation)
- Keras (with TensorFlow 2 it is merged into TensorFlow)
- DL4J (Java)
- CNTK (used mainly on Azure)
- MXNet (used on AWS; Apache project; Python, C++, Java, …)
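To show how low the entry barrier is, here is a minimal TensorFlow/Keras getting-started sketch (framework taken from the list above; the toy XOR data and the layer sizes are arbitrary choices on my part):

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Toy problem: learn the XOR function from its four input/output pairs.
x = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype="float32")
y = np.array([[0], [1], [1], [0]], dtype="float32")

model = keras.Sequential([
    keras.Input(shape=(2,)),
    layers.Dense(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x, y, epochs=500, verbose=0)

print(model.predict(x).round())  # should be close to [[0], [1], [1], [0]]
```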