My tech BFF – Trend #5 of 8 from SXSW

Making technology adapt to us

Today we act as translators, continuously handling the interpretation between humans and machines. The interface is the medium between human and technology.

In the future this kind of translation may no longer be needed, as technology moves closer and closer to natural human behavior.

Humanizing tech is not new – it is an ongoing journey. We used to think mobile phones were brilliant devices, then we got used to smartphones, and now we are incorporating tech into all kinds of materials.

People feel far more engaged with a bot that is a little bit more like them. In the future, interacting with tech should be as easy as interacting with a human.

Laura Granka (Director) and Hector Ouilet (Head of Design, Google Search) at Google discussed “the conversational future” and the need to make technology adapt to us.

 

Foundations of conversation

The interaction we have with technology can be seen as a conversation. It does not necessarily need to involve voice – it could be typing something into a computer, touch, or even a shift in muscle power.

Conversations are powerful because they are the way humans exchange new knowledge. Through the act of communicating – back and forth – meaning arises. And that is fundamental.

These foundations of conversation will characterize the interaction between machines and humans. It is a back-and-forth conversation today as well, but it could be much easier.

We change as a consequence of the conversation and we generate new moments. How can we make this conversation as natural as one between two people? One example of recent progress is Google’s Duplex voice assistant.

Another example of recent progress is Microsoft’s Cortana.

One thing to think about is how we treat our technology – how we mediate it – what is the bridge between us and the technology?

 

Voice and speech are a new paradigm

Speech will get better for everybody. Most people do not realize that we are on the brink of a new paradigm, just as when we were 1–2 years into the internet.

We are now 10 years into the smartphone. Today’s structures are built by one company that codes up everything, but this will not work in the future. It must be an ecosystem – the world must help develop the next level of code.

Companies are making large investments in scaling AI and voice recognition, but it needs to become an ecosystem. For instance – how do you learn all the services in the world?

We need all the people in the world to contribute to make this possible. We will all get a digital assistant that knows our own specific preferences. This will be a paradigm more important than the web and mobile. We are on the verge of a real leap in machine learning.

1 in every 4 searches on Google on a mobile device is done by voice

– Gary Vaynerchuk, CEO of VaynerMedia

 

The digital assistant who knows ME

Many of our interactions with technology are static, one at a time. In the future we will see digital assistants that, unlike Amazon Alexa and Google Assistant, know their friend or owner very well.

These kinds of assistants will, for example, be able to help you plan the trip to your sister’s wedding without you having to tell them all the details. They will know when and where you’re going, where to stay, what to buy as a gift, etc. The demand for digital assistants that proactively know your preferences will grow.

It might also be your personal shopper, able to find exactly what you prefer. In the beginning it will need you to input your preferences, but eventually it will learn and provide you with better and better results.

 

The new proactive conversation

Besides having all the available data about you – the services and products you use, what your schedule looks like and all your geographical positions – this new kind of digital assistant will also be more of a friend, socially.

Today, when you talk to ‘Alexa’ it is up to you to initiate the conversation – to ask her for music, to set the alarm, etc. The next step will be for these assistants to initiate conversations themselves, without you asking for it.

It could, for example, be a reminder of what will happen today, or a traffic update since it knows you will be driving to work by car. More proactive and more involved in your everyday life and your needs.

 

Someone is always listening

There is of course an ethical aspect to it. Do we really want something to know everything about us, and also be able to interfere in conversations over the dinner table and correct us if we say something wrong? Do we have to turn it off on occasions when we really do not want to be disturbed?

 

Who, what and when?

We do not yet know who will be first to develop this kind of digital assistant, or whether it will be a cooperation between different companies to enable an ecosystem with a lot of different information.

Some players that were brought up in the discussion were Google, Apple, Samsung and Facebook. However, it remains to be seen what they will develop and when. We certainly look forward to following this development.


SXSW is one of the biggest digital conferences in the world, and a global meeting place for the world’s most innovative technology companies and for people interested in how disruption can transform their business and everyday lives. The event takes place over 10 days each year, and this year Cartina had the chance to be part of it.

This series covers 8 global megatrends that business leaders, experts, innovators and disruptors talked about during the days in Austin. If you want to read the full report, click the button above and we will email it to you.

 

Shouldn’t more people read this? Don’t forget to share!
