AI – Doomsday Prophecy Trend #1 of 8 from Web Summit

Success in creating effective AI could be the biggest event in the history of our civilization. Or the worst. We just don't know.

– Stephen Hawking – Scientist, professor and author

 

This quote from Stephen Hawking at the opening ceremony of Web Summit set the tone for many of the discussions that followed throughout the event. The conversations about artificial intelligence at Web Summit were pushed to the extremes: it's either going to save the world or be the end of it all.

Some consider artificial intelligence the most impactful discovery ever made. Intelligence has long been perceived as exclusive to living organisms, but thanks to AI that's no longer the case.

AI was a hyped topic at Web Summit, even though it's already integrated into many of the products and tasks of our everyday lives, such as Siri, Google's search algorithms and face recognition.

The kind of AI mainly used today is known as narrow AI, because it can only complete a single, specific task. The long-term goal for many researchers, however, is to develop general intelligence.

 

The existential risk caused by AI

While narrow AI may outperform humans in whatever its specific task may be, for instance playing chess or solving equations, general intelligence (GI) would outperform humans in nearly every cognitive task.

There's no doubt the potential is enormous, for both good and bad, but as Stephen Hawking said, we don't know what AI will look like in the future. It is right now, however, that we decide which direction it will take.

Max Tegmark, professor at MIT, co-founded the Future of Life Institute, which works to reduce the existential risks to humanity posed by GI.

In contrast to other impactful innovations, such as the car, we have no room for error when developing AI. When the automobile was invented it had many safety issues, but we handled them over time with new features like seatbelts and airbags.

According to Max Tegmark, one of the biggest risks with artificial intelligence is that we won't have the option of fixing problems in hindsight.

As computers and algorithms today are increasingly shaped by machine learning rather than explicit human programming, we are no longer fully in control. When machines become smarter than us, the key issue is to align them with our human values and goals.

 

Professor Einstein presenting at Web Summit 2017

 

Four key factors to win the wisdom race

Max Tegmark highlights four ways in which we can win the wisdom race and reduce the existential risks to mankind.

 

1. Ban autonomous weapons

Autonomous weapons are artificial intelligence systems whose sole purpose is to kill. Countries are already developing such systems, and in the wrong hands they could prove devastating.

There is a risk of an AI arms race that could escalate into an AI war causing mass casualties. These weapons can be programmed to be almost impossible to turn off, leaving humans without control.

Even though this sounds like a futuristic Terminator movie, the risk is present today with narrow AI, and it will grow as AI intelligence and autonomy increase.

 

2. Ensure that AI-generated wealth makes everyone better off

As AI develops, it can provide an improved standard of living for all, or complete misery for an unemployable majority.

A key challenge is to figure out how the wealth produced by AI will be taxed and shared. Many economists worry that increased automation will erode the middle class and push the gap between rich and poor to an all-time high.

Ensuring that our technological development makes everyone better off is crucial for a healthy democracy.

 

3. Invest in AI safety research

The recent hacker attacks around the world have exposed the risk of depending on IT systems. What if AI were in control of nuclear plants or other core parts of our society?

We need AI systems that we can fully trust, that can't be hacked and that don't crash like today's computers. This cannot be achieved at the corporate level alone, and many organizations are working to make AI safety research a national priority.

 

4. Think about what kind of future we want

The key question is which direction we want the future of technology to take: we must ask ourselves what kind of future we want, and invite everyone into the conversation. If we have no idea what we want, we are unlikely to get it.

A key takeaway is that we must push our own cognitive abilities to the limit to truly foresee the future we want and steer artificial intelligence towards it. Yet the robot named Einstein suggests it won't be easy: humans themselves are probably a greater threat to mankind than AI itself.

I think that robots will be able to absorb human values correctly – and that may be the problem.

– Professor Einstein – Robot


Web Summit is the world’s largest tech event, and a global meeting place for the world’s most innovative technology companies and people interested in how disruption can transform their business and everyday lives.

The event takes place over a few days each year, and Cartina had the chance to be part of it in late 2017. We got to experience the latest innovations and meet the frontrunners in different areas of technology, resulting in these trends.

This series consists of 8 global mega trends that business leaders, experts, innovators and disruptors talked about during the days in Lisbon.

 


William Lorenz

Management Consultant at Cartina